Aug 13 00:48:57.867825 kernel: Linux version 6.12.40-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 21:42:48 -00 2025 Aug 13 00:48:57.867883 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21 Aug 13 00:48:57.867893 kernel: BIOS-provided physical RAM map: Aug 13 00:48:57.867907 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable Aug 13 00:48:57.867917 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved Aug 13 00:48:57.867926 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Aug 13 00:48:57.867936 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable Aug 13 00:48:57.867946 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved Aug 13 00:48:57.867955 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Aug 13 00:48:57.867964 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Aug 13 00:48:57.867972 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Aug 13 00:48:57.867978 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Aug 13 00:48:57.867990 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable Aug 13 00:48:57.868000 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Aug 13 00:48:57.870202 kernel: NX (Execute Disable) protection: active Aug 13 00:48:57.870211 kernel: APIC: Static calls initialized Aug 13 00:48:57.870217 kernel: SMBIOS 2.8 present. 
Aug 13 00:48:57.870226 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified Aug 13 00:48:57.870232 kernel: DMI: Memory slots populated: 1/1 Aug 13 00:48:57.870238 kernel: Hypervisor detected: KVM Aug 13 00:48:57.870244 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Aug 13 00:48:57.870250 kernel: kvm-clock: using sched offset of 5815325290 cycles Aug 13 00:48:57.870257 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Aug 13 00:48:57.870263 kernel: tsc: Detected 2000.000 MHz processor Aug 13 00:48:57.870270 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Aug 13 00:48:57.870276 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Aug 13 00:48:57.870282 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000 Aug 13 00:48:57.870291 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Aug 13 00:48:57.870297 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Aug 13 00:48:57.870303 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000 Aug 13 00:48:57.870309 kernel: Using GB pages for direct mapping Aug 13 00:48:57.870315 kernel: ACPI: Early table checksum verification disabled Aug 13 00:48:57.870321 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS ) Aug 13 00:48:57.870327 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 00:48:57.870334 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 00:48:57.870340 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 00:48:57.870348 kernel: ACPI: FACS 0x000000007FFE0000 000040 Aug 13 00:48:57.870354 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 00:48:57.870360 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 00:48:57.870366 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 00:48:57.870375 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 00:48:57.870381 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea] Aug 13 00:48:57.870389 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6] Aug 13 00:48:57.870396 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Aug 13 00:48:57.870402 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a] Aug 13 00:48:57.870409 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2] Aug 13 00:48:57.870415 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de] Aug 13 00:48:57.870421 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306] Aug 13 00:48:57.870428 kernel: No NUMA configuration found Aug 13 00:48:57.870434 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff] Aug 13 00:48:57.870442 kernel: NODE_DATA(0) allocated [mem 0x17fff8dc0-0x17fffffff] Aug 13 00:48:57.870448 kernel: Zone ranges: Aug 13 00:48:57.870455 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Aug 13 00:48:57.870461 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Aug 13 00:48:57.870467 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff] Aug 13 00:48:57.870474 kernel: Device empty Aug 13 00:48:57.870480 kernel: Movable zone start for each node Aug 13 00:48:57.870487 kernel: Early memory node ranges Aug 13 00:48:57.870493 kernel: 
node 0: [mem 0x0000000000001000-0x000000000009efff] Aug 13 00:48:57.870503 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff] Aug 13 00:48:57.870517 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff] Aug 13 00:48:57.870529 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff] Aug 13 00:48:57.870540 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Aug 13 00:48:57.870551 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Aug 13 00:48:57.870563 kernel: On node 0, zone Normal: 35 pages in unavailable ranges Aug 13 00:48:57.870574 kernel: ACPI: PM-Timer IO Port: 0x608 Aug 13 00:48:57.870584 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Aug 13 00:48:57.870594 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Aug 13 00:48:57.870601 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Aug 13 00:48:57.870610 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Aug 13 00:48:57.870616 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Aug 13 00:48:57.870623 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Aug 13 00:48:57.870629 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Aug 13 00:48:57.870636 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Aug 13 00:48:57.870642 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Aug 13 00:48:57.870648 kernel: TSC deadline timer available Aug 13 00:48:57.870654 kernel: CPU topo: Max. logical packages: 1 Aug 13 00:48:57.870661 kernel: CPU topo: Max. logical dies: 1 Aug 13 00:48:57.870669 kernel: CPU topo: Max. dies per package: 1 Aug 13 00:48:57.870675 kernel: CPU topo: Max. threads per core: 1 Aug 13 00:48:57.870682 kernel: CPU topo: Num. cores per package: 2 Aug 13 00:48:57.870688 kernel: CPU topo: Num. threads per package: 2 Aug 13 00:48:57.870694 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Aug 13 00:48:57.870700 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Aug 13 00:48:57.870707 kernel: kvm-guest: KVM setup pv remote TLB flush Aug 13 00:48:57.870713 kernel: kvm-guest: setup PV sched yield Aug 13 00:48:57.870719 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Aug 13 00:48:57.870727 kernel: Booting paravirtualized kernel on KVM Aug 13 00:48:57.870734 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Aug 13 00:48:57.870740 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Aug 13 00:48:57.870747 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Aug 13 00:48:57.870753 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Aug 13 00:48:57.870759 kernel: pcpu-alloc: [0] 0 1 Aug 13 00:48:57.870766 kernel: kvm-guest: PV spinlocks enabled Aug 13 00:48:57.870772 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Aug 13 00:48:57.870780 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21 Aug 13 00:48:57.870788 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Aug 13 00:48:57.870795 kernel: random: crng init done
Aug 13 00:48:57.870801 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 13 00:48:57.870808 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 13 00:48:57.870814 kernel: Fallback order for Node 0: 0
Aug 13 00:48:57.870820 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443
Aug 13 00:48:57.870827 kernel: Policy zone: Normal
Aug 13 00:48:57.870833 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 00:48:57.870841 kernel: software IO TLB: area num 2.
Aug 13 00:48:57.870847 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Aug 13 00:48:57.870854 kernel: ftrace: allocating 40098 entries in 157 pages
Aug 13 00:48:57.870860 kernel: ftrace: allocated 157 pages with 5 groups
Aug 13 00:48:57.870866 kernel: Dynamic Preempt: voluntary
Aug 13 00:48:57.870873 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 13 00:48:57.870880 kernel: rcu: RCU event tracing is enabled.
Aug 13 00:48:57.870887 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Aug 13 00:48:57.870893 kernel: Trampoline variant of Tasks RCU enabled.
Aug 13 00:48:57.870900 kernel: Rude variant of Tasks RCU enabled.
Aug 13 00:48:57.870908 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 00:48:57.870915 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 00:48:57.870921 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Aug 13 00:48:57.870928 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 00:48:57.870940 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 00:48:57.870948 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 00:48:57.870955 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Aug 13 00:48:57.870962 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 13 00:48:57.870968 kernel: Console: colour VGA+ 80x25
Aug 13 00:48:57.870975 kernel: printk: legacy console [tty0] enabled
Aug 13 00:48:57.870982 kernel: printk: legacy console [ttyS0] enabled
Aug 13 00:48:57.870990 kernel: ACPI: Core revision 20240827
Aug 13 00:48:57.870997 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Aug 13 00:48:57.871003 kernel: APIC: Switch to symmetric I/O mode setup
Aug 13 00:48:57.871010 kernel: x2apic enabled
Aug 13 00:48:57.871017 kernel: APIC: Switched APIC routing to: physical x2apic
Aug 13 00:48:57.871025 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Aug 13 00:48:57.871032 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Aug 13 00:48:57.871039 kernel: kvm-guest: setup PV IPIs
Aug 13 00:48:57.871045 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Aug 13 00:48:57.871052 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
Aug 13 00:48:57.871059 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000000)
Aug 13 00:48:57.871066 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Aug 13 00:48:57.871072 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Aug 13 00:48:57.871079 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Aug 13 00:48:57.871088 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 13 00:48:57.871094 kernel: Spectre V2 : Mitigation: Retpolines
Aug 13 00:48:57.871101 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Aug 13 00:48:57.871108 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Aug 13 00:48:57.871114 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Aug 13 00:48:57.871121 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Aug 13 00:48:57.871128 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Aug 13 00:48:57.871135 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Aug 13 00:48:57.871143 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Aug 13 00:48:57.871150 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Aug 13 00:48:57.871157 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Aug 13 00:48:57.871164 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 13 00:48:57.872300 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 13 00:48:57.872311 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 13 00:48:57.872319 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Aug 13 00:48:57.872325 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 13 00:48:57.872332 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Aug 13 00:48:57.872343 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Aug 13 00:48:57.872349 kernel: Freeing SMP alternatives memory: 32K
Aug 13 00:48:57.872356 kernel: pid_max: default: 32768 minimum: 301
Aug 13 00:48:57.872363 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Aug 13 00:48:57.872370 kernel: landlock: Up and running.
Aug 13 00:48:57.872377 kernel: SELinux: Initializing.
Aug 13 00:48:57.872383 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 00:48:57.872390 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 00:48:57.872397 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Aug 13 00:48:57.872406 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Aug 13 00:48:57.872412 kernel: ... version: 0
Aug 13 00:48:57.872419 kernel: ... bit width: 48
Aug 13 00:48:57.872426 kernel: ... generic registers: 6
Aug 13 00:48:57.872433 kernel: ... value mask: 0000ffffffffffff
Aug 13 00:48:57.872439 kernel: ... max period: 00007fffffffffff
Aug 13 00:48:57.872446 kernel: ... fixed-purpose events: 0
Aug 13 00:48:57.872452 kernel: ... event mask: 000000000000003f
Aug 13 00:48:57.872459 kernel: signal: max sigframe size: 3376
Aug 13 00:48:57.872467 kernel: rcu: Hierarchical SRCU implementation.
Aug 13 00:48:57.872474 kernel: rcu: Max phase no-delay instances is 400.
Aug 13 00:48:57.872482 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Aug 13 00:48:57.872488 kernel: smp: Bringing up secondary CPUs ... Aug 13 00:48:57.872495 kernel: smpboot: x86: Booting SMP configuration: Aug 13 00:48:57.872501 kernel: .... node #0, CPUs: #1 Aug 13 00:48:57.872508 kernel: smp: Brought up 1 node, 2 CPUs Aug 13 00:48:57.872515 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS) Aug 13 00:48:57.872522 kernel: Memory: 3961808K/4193772K available (14336K kernel code, 2430K rwdata, 9960K rodata, 54444K init, 2524K bss, 227288K reserved, 0K cma-reserved) Aug 13 00:48:57.872530 kernel: devtmpfs: initialized Aug 13 00:48:57.872537 kernel: x86/mm: Memory block size: 128MB Aug 13 00:48:57.872544 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 13 00:48:57.872550 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Aug 13 00:48:57.872557 kernel: pinctrl core: initialized pinctrl subsystem Aug 13 00:48:57.872564 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 13 00:48:57.872570 kernel: audit: initializing netlink subsys (disabled) Aug 13 00:48:57.872577 kernel: audit: type=2000 audit(1755046135.356:1): state=initialized audit_enabled=0 res=1 Aug 13 00:48:57.872584 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 13 00:48:57.872592 kernel: thermal_sys: Registered thermal governor 'user_space' Aug 13 00:48:57.872599 kernel: cpuidle: using governor menu Aug 13 00:48:57.872606 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Aug 13 00:48:57.872612 kernel: dca service started, version 1.12.1 Aug 13 00:48:57.872619 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Aug 13 00:48:57.872626 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry Aug 13 00:48:57.872632 kernel: PCI: Using configuration type 1 for base access Aug 13 00:48:57.872639 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Aug 13 00:48:57.872646 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 13 00:48:57.872654 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Aug 13 00:48:57.872661 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 13 00:48:57.872668 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Aug 13 00:48:57.872674 kernel: ACPI: Added _OSI(Module Device)
Aug 13 00:48:57.872681 kernel: ACPI: Added _OSI(Processor Device)
Aug 13 00:48:57.872688 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 13 00:48:57.872694 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 13 00:48:57.872701 kernel: ACPI: Interpreter enabled
Aug 13 00:48:57.872708 kernel: ACPI: PM: (supports S0 S3 S5)
Aug 13 00:48:57.872716 kernel: ACPI: Using IOAPIC for interrupt routing
Aug 13 00:48:57.872723 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Aug 13 00:48:57.872730 kernel: PCI: Using E820 reservations for host bridge windows
Aug 13 00:48:57.872737 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Aug 13 00:48:57.872743 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 13 00:48:57.872902 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 13 00:48:57.873016 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Aug 13 00:48:57.873127 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Aug 13 00:48:57.873136 kernel: PCI host bridge to bus 0000:00
Aug 13 00:48:57.873399 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Aug 13 00:48:57.873504 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Aug 13 00:48:57.873619 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Aug 13 00:48:57.873910 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Aug 13 00:48:57.874294 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Aug 13 00:48:57.874397 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Aug 13 00:48:57.874498 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 13 00:48:57.874624 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Aug 13 00:48:57.874746 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Aug 13 00:48:57.875002 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Aug 13 00:48:57.875111 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Aug 13 00:48:57.876272 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Aug 13 00:48:57.876448 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Aug 13 00:48:57.876570 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Aug 13 00:48:57.876677 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f]
Aug 13 00:48:57.876782 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Aug 13 00:48:57.876887 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Aug 13 00:48:57.877000 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Aug 13 00:48:57.877112 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
Aug 13 00:48:57.877245 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Aug 13 00:48:57.877358 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Aug 13 00:48:57.877464 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Aug 13 00:48:57.877587 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Aug 13 00:48:57.877712 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Aug 13 00:48:57.877831 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Aug 13 00:48:57.877941 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df]
Aug 13 00:48:57.878046 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff]
Aug 13 00:48:57.878158 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Aug 13 00:48:57.883066 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Aug 13 00:48:57.883081 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Aug 13 00:48:57.883090 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Aug 13 00:48:57.883097 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Aug 13 00:48:57.883104 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Aug 13 00:48:57.883116 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Aug 13 00:48:57.883123 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Aug 13 00:48:57.883130 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Aug 13 00:48:57.883137 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Aug 13 00:48:57.883146 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Aug 13 00:48:57.883158 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Aug 13 00:48:57.883189 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Aug 13 00:48:57.883199 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Aug 13 00:48:57.883206 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Aug 13 00:48:57.883228 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Aug 13 00:48:57.883244 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Aug 13 00:48:57.883252 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Aug 13 00:48:57.883259 kernel: iommu: Default domain type: Translated
Aug 13 00:48:57.883266 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Aug 13 00:48:57.883273 kernel: PCI: Using ACPI for IRQ routing
Aug 13 00:48:57.883280 kernel: PCI: pci_cache_line_size set to 64 bytes
Aug 13 00:48:57.883287 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Aug 13 00:48:57.883294 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Aug 13 00:48:57.883420 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Aug 13 00:48:57.883528 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Aug 13 00:48:57.883633 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Aug 13 00:48:57.883642 kernel: vgaarb: loaded
Aug 13 00:48:57.883650 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Aug 13 00:48:57.883658 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Aug 13 00:48:57.883665 kernel: clocksource: Switched to clocksource kvm-clock
Aug 13 00:48:57.883672 kernel: VFS: Disk quotas dquot_6.6.0
Aug 13 00:48:57.883683 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 13 00:48:57.883690 kernel: pnp: PnP ACPI init
Aug 13 00:48:57.883994 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Aug 13 00:48:57.884005 kernel: pnp: PnP ACPI: found 5 devices
Aug 13 00:48:57.884013 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Aug 13 00:48:57.884020 kernel: NET: Registered PF_INET protocol family
Aug 13 00:48:57.884027 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 13 00:48:57.884035 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 13 00:48:57.884045 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 13 00:48:57.884052 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 13 00:48:57.884059 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Aug 13 00:48:57.884067 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 13 00:48:57.884074 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 00:48:57.884081 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 00:48:57.884088 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 13 00:48:57.884096 kernel: NET: Registered PF_XDP protocol family
Aug 13 00:48:57.884210 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Aug 13 00:48:57.884312 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Aug 13 00:48:57.884408 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Aug 13 00:48:57.884503 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Aug 13 00:48:57.884598 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Aug 13 00:48:57.884692 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Aug 13 00:48:57.884701 kernel: PCI: CLS 0 bytes, default 64
Aug 13 00:48:57.884708 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Aug 13 00:48:57.884715 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Aug 13 00:48:57.884724 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
Aug 13 00:48:57.884731 kernel: Initialise system trusted keyrings
Aug 13 00:48:57.884738 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 13 00:48:57.884745 kernel: Key type asymmetric registered
Aug 13 00:48:57.884752 kernel: Asymmetric key parser 'x509' registered
Aug 13 00:48:57.884759 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Aug 13 00:48:57.884766 kernel: io scheduler mq-deadline registered
Aug 13 00:48:57.884773 kernel: io scheduler kyber registered
Aug 13 00:48:57.884780 kernel: io scheduler bfq registered
Aug 13 00:48:57.884789 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Aug 13 00:48:57.884796 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Aug 13 00:48:57.884803 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Aug 13 00:48:57.884810 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 13 00:48:57.884817 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Aug 13 00:48:57.884824 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Aug 13 00:48:57.884831 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Aug 13 00:48:57.884838 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Aug 13 00:48:57.884969 kernel: rtc_cmos 00:03: RTC can wake from S4
Aug 13 00:48:57.884983 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Aug 13 00:48:57.885085 kernel: rtc_cmos 00:03: registered as rtc0
Aug 13 00:48:57.887215 kernel: rtc_cmos 00:03: setting system clock to 2025-08-13T00:48:57 UTC (1755046137)
Aug 13 00:48:57.887348 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Aug 13 00:48:57.887360 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Aug 13 00:48:57.887368 kernel: NET: Registered PF_INET6 protocol family
Aug 13 00:48:57.887375 kernel: Segment Routing with IPv6
Aug 13 00:48:57.887382 kernel: In-situ OAM (IOAM) with IPv6
Aug 13 00:48:57.887392 kernel: NET: Registered PF_PACKET protocol family
Aug 13 00:48:57.887399 kernel: Key type dns_resolver registered
Aug 13 00:48:57.887407 kernel: IPI shorthand broadcast: enabled
Aug 13 00:48:57.887413 kernel: sched_clock: Marking stable (2868004000, 231578270)->(3141156910, -41574640)
Aug 13 00:48:57.887420 kernel: registered taskstats version 1
Aug 13 00:48:57.887427 kernel: Loading compiled-in X.509 certificates
Aug 13 00:48:57.887435 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.40-flatcar: dee0b464d3f7f8d09744a2392f69dde258bc95c0'
Aug 13 00:48:57.887442 kernel: Demotion targets for Node 0: null
Aug 13 00:48:57.887449 kernel: Key type .fscrypt registered
Aug 13 00:48:57.887458 kernel: Key type fscrypt-provisioning registered
Aug 13 00:48:57.887465 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 13 00:48:57.887471 kernel: ima: Allocated hash algorithm: sha1
Aug 13 00:48:57.887478 kernel: ima: No architecture policies found
Aug 13 00:48:57.887485 kernel: clk: Disabling unused clocks
Aug 13 00:48:57.887492 kernel: Warning: unable to open an initial console.
Aug 13 00:48:57.887499 kernel: Freeing unused kernel image (initmem) memory: 54444K
Aug 13 00:48:57.887506 kernel: Write protecting the kernel read-only data: 24576k
Aug 13 00:48:57.887513 kernel: Freeing unused kernel image (rodata/data gap) memory: 280K
Aug 13 00:48:57.887522 kernel: Run /init as init process
Aug 13 00:48:57.887528 kernel: with arguments:
Aug 13 00:48:57.887535 kernel: /init
Aug 13 00:48:57.887542 kernel: with environment:
Aug 13 00:48:57.887549 kernel: HOME=/
Aug 13 00:48:57.887567 kernel: TERM=linux
Aug 13 00:48:57.887576 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 13 00:48:57.887585 systemd[1]: Successfully made /usr/ read-only.
Aug 13 00:48:57.887597 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Aug 13 00:48:57.887605 systemd[1]: Detected virtualization kvm.
Aug 13 00:48:57.887612 systemd[1]: Detected architecture x86-64.
Aug 13 00:48:57.887619 systemd[1]: Running in initrd.
Aug 13 00:48:57.887626 systemd[1]: No hostname configured, using default hostname.
Aug 13 00:48:57.887634 systemd[1]: Hostname set to .
Aug 13 00:48:57.887643 systemd[1]: Initializing machine ID from random generator.
Aug 13 00:48:57.887651 systemd[1]: Queued start job for default target initrd.target.
Aug 13 00:48:57.887660 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 00:48:57.887667 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 00:48:57.887675 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 13 00:48:57.887683 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 00:48:57.887690 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Aug 13 00:48:57.887699 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Aug 13 00:48:57.887707 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Aug 13 00:48:57.887717 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Aug 13 00:48:57.887724 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 00:48:57.887732 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 00:48:57.887739 systemd[1]: Reached target paths.target - Path Units. Aug 13 00:48:57.887747 systemd[1]: Reached target slices.target - Slice Units. Aug 13 00:48:57.887939 systemd[1]: Reached target swap.target - Swaps. Aug 13 00:48:57.887947 systemd[1]: Reached target timers.target - Timer Units. Aug 13 00:48:57.887954 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 00:48:57.887963 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 00:48:57.887970 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Aug 13 00:48:57.887978 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Aug 13 00:48:57.887985 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 00:48:57.887993 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 00:48:57.888000 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 00:48:57.888008 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 00:48:57.888017 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Aug 13 00:48:57.888025 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 00:48:57.888032 systemd[1]: Finished network-cleanup.service - Network Cleanup. Aug 13 00:48:57.888040 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Aug 13 00:48:57.888048 systemd[1]: Starting systemd-fsck-usr.service... Aug 13 00:48:57.888055 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 00:48:57.888063 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 00:48:57.888072 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:48:57.888079 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Aug 13 00:48:57.888106 systemd-journald[206]: Collecting audit messages is disabled. Aug 13 00:48:57.888127 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 00:48:57.888135 systemd[1]: Finished systemd-fsck-usr.service. Aug 13 00:48:57.888143 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 13 00:48:57.888151 systemd-journald[206]: Journal started Aug 13 00:48:57.888184 systemd-journald[206]: Runtime Journal (/run/log/journal/70ebc25528f34bc79e2c87e036136691) is 8M, max 78.5M, 70.5M free. 
Aug 13 00:48:57.877485 systemd-modules-load[207]: Inserted module 'overlay' Aug 13 00:48:57.897194 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 00:48:57.921204 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 13 00:48:57.925612 systemd-modules-load[207]: Inserted module 'br_netfilter' Aug 13 00:48:57.975565 kernel: Bridge firewalling registered Aug 13 00:48:57.974953 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 00:48:57.976248 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:48:57.977412 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 00:48:57.981894 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 00:48:57.985309 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 00:48:57.990001 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 00:48:57.997114 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 00:48:58.001254 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:48:58.010488 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 00:48:58.011771 systemd-tmpfiles[228]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Aug 13 00:48:58.011953 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 00:48:58.016596 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Aug 13 00:48:58.018237 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 00:48:58.022378 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 00:48:58.035214 dracut-cmdline[243]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21 Aug 13 00:48:58.064682 systemd-resolved[245]: Positive Trust Anchors: Aug 13 00:48:58.065255 systemd-resolved[245]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 00:48:58.065283 systemd-resolved[245]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 00:48:58.071319 systemd-resolved[245]: Defaulting to hostname 'linux'. Aug 13 00:48:58.072565 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Aug 13 00:48:58.073620 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 00:48:58.126218 kernel: SCSI subsystem initialized Aug 13 00:48:58.135245 kernel: Loading iSCSI transport class v2.0-870. Aug 13 00:48:58.146203 kernel: iscsi: registered transport (tcp) Aug 13 00:48:58.167768 kernel: iscsi: registered transport (qla4xxx) Aug 13 00:48:58.167965 kernel: QLogic iSCSI HBA Driver Aug 13 00:48:58.190166 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 13 00:48:58.204352 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 00:48:58.205941 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 13 00:48:58.256843 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Aug 13 00:48:58.259564 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Aug 13 00:48:58.315205 kernel: raid6: avx2x4 gen() 30437 MB/s Aug 13 00:48:58.333195 kernel: raid6: avx2x2 gen() 29819 MB/s Aug 13 00:48:58.351553 kernel: raid6: avx2x1 gen() 21430 MB/s Aug 13 00:48:58.351583 kernel: raid6: using algorithm avx2x4 gen() 30437 MB/s Aug 13 00:48:58.370595 kernel: raid6: .... xor() 4610 MB/s, rmw enabled Aug 13 00:48:58.370641 kernel: raid6: using avx2x2 recovery algorithm Aug 13 00:48:58.391199 kernel: xor: automatically using best checksumming function avx Aug 13 00:48:58.534205 kernel: Btrfs loaded, zoned=no, fsverity=no Aug 13 00:48:58.541820 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Aug 13 00:48:58.544273 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 00:48:58.567455 systemd-udevd[454]: Using default interface naming scheme 'v255'. Aug 13 00:48:58.573037 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 00:48:58.576303 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Aug 13 00:48:58.598761 dracut-pre-trigger[461]: rd.md=0: removing MD RAID activation Aug 13 00:48:58.630520 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 00:48:58.633461 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 00:48:58.701461 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 00:48:58.705314 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Aug 13 00:48:58.773199 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues Aug 13 00:48:58.949953 kernel: scsi host0: Virtio SCSI HBA Aug 13 00:48:58.961676 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Aug 13 00:48:58.961977 kernel: cryptd: max_cpu_qlen set to 1000 Aug 13 00:48:58.989239 kernel: libata version 3.00 loaded. Aug 13 00:48:58.999224 kernel: AES CTR mode by8 optimization enabled Aug 13 00:48:59.002582 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 00:48:59.002707 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:48:59.005284 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:48:59.011400 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Aug 13 00:48:59.025204 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Aug 13 00:48:59.036147 kernel: sd 0:0:0:0: Power-on or device reset occurred Aug 13 00:48:59.036429 kernel: sd 0:0:0:0: [sda] 9297920 512-byte logical blocks: (4.76 GB/4.43 GiB) Aug 13 00:48:59.036599 kernel: sd 0:0:0:0: [sda] Write Protect is off Aug 13 00:48:59.036731 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Aug 13 00:48:59.036861 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Aug 13 00:48:59.036987 kernel: ahci 0000:00:1f.2: version 3.0 Aug 13 00:48:59.037125 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Aug 13 00:48:59.037143 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Aug 13 00:48:59.042607 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Aug 13 00:48:59.045481 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Aug 13 00:48:59.045615 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Aug 13 00:48:59.045627 kernel: GPT:9289727 != 9297919 Aug 13 00:48:59.045636 kernel: GPT:Alternate GPT header not at the end of the disk. Aug 13 00:48:59.045645 kernel: GPT:9289727 != 9297919 Aug 13 00:48:59.045654 kernel: GPT: Use GNU Parted to correct GPT errors. Aug 13 00:48:59.045839 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 00:48:59.045849 kernel: scsi host1: ahci Aug 13 00:48:59.045986 kernel: scsi host2: ahci Aug 13 00:48:59.046011 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Aug 13 00:48:59.049664 kernel: scsi host3: ahci Aug 13 00:48:59.074039 kernel: scsi host4: ahci Aug 13 00:48:59.073538 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Aug 13 00:48:59.082212 kernel: scsi host5: ahci Aug 13 00:48:59.090237 kernel: scsi host6: ahci Aug 13 00:48:59.090422 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46 lpm-pol 0 Aug 13 00:48:59.090434 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46 lpm-pol 0 Aug 13 00:48:59.090444 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46 lpm-pol 0 Aug 13 00:48:59.090453 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46 lpm-pol 0 Aug 13 00:48:59.090462 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46 lpm-pol 0 Aug 13 00:48:59.090475 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46 lpm-pol 0 Aug 13 00:48:59.102753 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Aug 13 00:48:59.143760 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Aug 13 00:48:59.170261 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Aug 13 00:48:59.205360 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:48:59.214037 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Aug 13 00:48:59.214666 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Aug 13 00:48:59.217877 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Aug 13 00:48:59.232236 disk-uuid[629]: Primary Header is updated. Aug 13 00:48:59.232236 disk-uuid[629]: Secondary Entries is updated. Aug 13 00:48:59.232236 disk-uuid[629]: Secondary Header is updated. 
Aug 13 00:48:59.244201 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 00:48:59.260200 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 00:48:59.399211 kernel: ata3: SATA link down (SStatus 0 SControl 300) Aug 13 00:48:59.401291 kernel: ata5: SATA link down (SStatus 0 SControl 300) Aug 13 00:48:59.401310 kernel: ata2: SATA link down (SStatus 0 SControl 300) Aug 13 00:48:59.403708 kernel: ata1: SATA link down (SStatus 0 SControl 300) Aug 13 00:48:59.404402 kernel: ata6: SATA link down (SStatus 0 SControl 300) Aug 13 00:48:59.406387 kernel: ata4: SATA link down (SStatus 0 SControl 300) Aug 13 00:48:59.425360 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Aug 13 00:48:59.428021 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 00:48:59.429344 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 00:48:59.430004 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 00:48:59.432360 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 13 00:48:59.452341 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Aug 13 00:49:00.263682 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 00:49:00.265456 disk-uuid[630]: The operation has completed successfully. Aug 13 00:49:00.312583 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 13 00:49:00.312694 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Aug 13 00:49:00.341155 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 13 00:49:00.352998 sh[659]: Success Aug 13 00:49:00.370464 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 13 00:49:00.370495 kernel: device-mapper: uevent: version 1.0.3 Aug 13 00:49:00.372645 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Aug 13 00:49:00.381196 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Aug 13 00:49:00.431540 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 13 00:49:00.434472 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Aug 13 00:49:00.444283 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Aug 13 00:49:00.455890 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Aug 13 00:49:00.455919 kernel: BTRFS: device fsid 0c0338fb-9434-41c1-99a2-737cbe2351c4 devid 1 transid 44 /dev/mapper/usr (254:0) scanned by mount (671) Aug 13 00:49:00.461266 kernel: BTRFS info (device dm-0): first mount of filesystem 0c0338fb-9434-41c1-99a2-737cbe2351c4 Aug 13 00:49:00.461289 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Aug 13 00:49:00.463979 kernel: BTRFS info (device dm-0): using free-space-tree Aug 13 00:49:00.473290 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Aug 13 00:49:00.474430 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Aug 13 00:49:00.475502 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Aug 13 00:49:00.476423 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Aug 13 00:49:00.479272 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Aug 13 00:49:00.513215 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (704) Aug 13 00:49:00.516598 kernel: BTRFS info (device sda6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 00:49:00.516628 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 00:49:00.518320 kernel: BTRFS info (device sda6): using free-space-tree Aug 13 00:49:00.528558 kernel: BTRFS info (device sda6): last unmount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 00:49:00.528874 systemd[1]: Finished ignition-setup.service - Ignition (setup). Aug 13 00:49:00.531310 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Aug 13 00:49:00.607153 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 00:49:00.615278 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 00:49:00.641101 ignition[767]: Ignition 2.21.0 Aug 13 00:49:00.641119 ignition[767]: Stage: fetch-offline Aug 13 00:49:00.641147 ignition[767]: no configs at "/usr/lib/ignition/base.d" Aug 13 00:49:00.643508 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 00:49:00.641156 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 00:49:00.641260 ignition[767]: parsed url from cmdline: "" Aug 13 00:49:00.641265 ignition[767]: no config URL provided Aug 13 00:49:00.641269 ignition[767]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 00:49:00.641278 ignition[767]: no config at "/usr/lib/ignition/user.ign" Aug 13 00:49:00.641283 ignition[767]: failed to fetch config: resource requires networking Aug 13 00:49:00.641427 ignition[767]: Ignition finished successfully Aug 13 00:49:00.662076 systemd-networkd[842]: lo: Link UP Aug 13 00:49:00.662090 systemd-networkd[842]: lo: Gained carrier Aug 13 00:49:00.663502 systemd-networkd[842]: Enumeration completed Aug 13 00:49:00.664060 systemd-networkd[842]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:49:00.664064 systemd-networkd[842]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:49:00.664258 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 00:49:00.666366 systemd[1]: Reached target network.target - Network. Aug 13 00:49:00.666464 systemd-networkd[842]: eth0: Link UP Aug 13 00:49:00.666787 systemd-networkd[842]: eth0: Gained carrier Aug 13 00:49:00.666796 systemd-networkd[842]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:49:00.672322 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Aug 13 00:49:00.714374 ignition[850]: Ignition 2.21.0 Aug 13 00:49:00.714389 ignition[850]: Stage: fetch Aug 13 00:49:00.714506 ignition[850]: no configs at "/usr/lib/ignition/base.d" Aug 13 00:49:00.714516 ignition[850]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 00:49:00.714758 ignition[850]: parsed url from cmdline: "" Aug 13 00:49:00.714763 ignition[850]: no config URL provided Aug 13 00:49:00.714769 ignition[850]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 00:49:00.714779 ignition[850]: no config at "/usr/lib/ignition/user.ign" Aug 13 00:49:00.714932 ignition[850]: PUT http://169.254.169.254/v1/token: attempt #1 Aug 13 00:49:00.715315 ignition[850]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Aug 13 00:49:00.915520 ignition[850]: PUT http://169.254.169.254/v1/token: attempt #2 Aug 13 00:49:00.915693 ignition[850]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Aug 13 00:49:01.180247 systemd-networkd[842]: eth0: DHCPv4 address 172.234.199.101/24, gateway 172.234.199.1 acquired from 23.194.118.65 Aug 13 00:49:01.315831 ignition[850]: PUT http://169.254.169.254/v1/token: attempt #3 Aug 13 00:49:01.413995 ignition[850]: PUT result: OK Aug 13 00:49:01.414070 ignition[850]: GET http://169.254.169.254/v1/user-data: attempt #1 Aug 13 00:49:01.538511 ignition[850]: GET result: OK Aug 13 00:49:01.538637 ignition[850]: parsing config with SHA512: 9a5b80d0a3945d969d07061fed4d787304c6b0dc6e686f7a2cab52360e6181a0823a48fd977cc130fdb44ee99e1c77d553cc4d4eded2f888c41e84ce377bc1f8 Aug 13 00:49:01.542282 unknown[850]: fetched base config from "system" Aug 13 00:49:01.542291 unknown[850]: fetched base config from "system" Aug 13 00:49:01.542548 ignition[850]: fetch: fetch complete Aug 13 00:49:01.542297 unknown[850]: fetched user config from "akamai" Aug 13 00:49:01.542553 ignition[850]: fetch: fetch passed Aug 13 00:49:01.542588 ignition[850]: Ignition finished successfully Aug 13 00:49:01.545970 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Aug 13 00:49:01.569710 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Aug 13 00:49:01.598096 ignition[858]: Ignition 2.21.0 Aug 13 00:49:01.598107 ignition[858]: Stage: kargs Aug 13 00:49:01.598290 ignition[858]: no configs at "/usr/lib/ignition/base.d" Aug 13 00:49:01.598301 ignition[858]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 00:49:01.599332 ignition[858]: kargs: kargs passed Aug 13 00:49:01.599375 ignition[858]: Ignition finished successfully Aug 13 00:49:01.602487 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Aug 13 00:49:01.604591 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Aug 13 00:49:01.628804 ignition[864]: Ignition 2.21.0 Aug 13 00:49:01.628817 ignition[864]: Stage: disks Aug 13 00:49:01.628932 ignition[864]: no configs at "/usr/lib/ignition/base.d" Aug 13 00:49:01.628941 ignition[864]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 00:49:01.629587 ignition[864]: disks: disks passed Aug 13 00:49:01.630871 systemd[1]: Finished ignition-disks.service - Ignition (disks). Aug 13 00:49:01.629618 ignition[864]: Ignition finished successfully Aug 13 00:49:01.632086 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Aug 13 00:49:01.633038 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
Aug 13 00:49:01.634281 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 00:49:01.635455 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 00:49:01.636596 systemd[1]: Reached target basic.target - Basic System. Aug 13 00:49:01.638911 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Aug 13 00:49:01.676856 systemd-fsck[872]: ROOT: clean, 15/553520 files, 52789/553472 blocks Aug 13 00:49:01.678682 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 13 00:49:01.681299 systemd[1]: Mounting sysroot.mount - /sysroot... Aug 13 00:49:01.792207 kernel: EXT4-fs (sda9): mounted filesystem 069caac6-7833-4acd-8940-01a7ff7d1281 r/w with ordered data mode. Quota mode: none. Aug 13 00:49:01.792673 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 13 00:49:01.794005 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 13 00:49:01.795853 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 00:49:01.798241 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 13 00:49:01.801129 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Aug 13 00:49:01.801200 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 13 00:49:01.801225 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 00:49:01.807511 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 13 00:49:01.809997 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Aug 13 00:49:01.818187 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (880) Aug 13 00:49:01.821424 kernel: BTRFS info (device sda6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 00:49:01.821455 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 00:49:01.823301 kernel: BTRFS info (device sda6): using free-space-tree Aug 13 00:49:01.828129 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 13 00:49:01.866432 initrd-setup-root[904]: cut: /sysroot/etc/passwd: No such file or directory Aug 13 00:49:01.872319 initrd-setup-root[911]: cut: /sysroot/etc/group: No such file or directory Aug 13 00:49:01.876637 initrd-setup-root[918]: cut: /sysroot/etc/shadow: No such file or directory Aug 13 00:49:01.880907 initrd-setup-root[925]: cut: /sysroot/etc/gshadow: No such file or directory Aug 13 00:49:01.974692 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 13 00:49:01.977235 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 13 00:49:01.979406 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 13 00:49:01.995265 systemd[1]: sysroot-oem.mount: Deactivated successfully. Aug 13 00:49:01.998575 kernel: BTRFS info (device sda6): last unmount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 00:49:02.014140 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Aug 13 00:49:02.022200 ignition[994]: INFO : Ignition 2.21.0 Aug 13 00:49:02.022200 ignition[994]: INFO : Stage: mount Aug 13 00:49:02.023613 ignition[994]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 00:49:02.023613 ignition[994]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 00:49:02.025371 ignition[994]: INFO : mount: mount passed Aug 13 00:49:02.025371 ignition[994]: INFO : Ignition finished successfully Aug 13 00:49:02.026961 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 13 00:49:02.028644 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 13 00:49:02.534329 systemd-networkd[842]: eth0: Gained IPv6LL Aug 13 00:49:02.794055 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 00:49:02.823203 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (1004) Aug 13 00:49:02.823256 kernel: BTRFS info (device sda6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 00:49:02.825640 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 00:49:02.827338 kernel: BTRFS info (device sda6): using free-space-tree Aug 13 00:49:02.832272 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 13 00:49:02.858941 ignition[1020]: INFO : Ignition 2.21.0 Aug 13 00:49:02.858941 ignition[1020]: INFO : Stage: files Aug 13 00:49:02.860198 ignition[1020]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 00:49:02.860198 ignition[1020]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 00:49:02.860198 ignition[1020]: DEBUG : files: compiled without relabeling support, skipping Aug 13 00:49:02.862406 ignition[1020]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 13 00:49:02.862406 ignition[1020]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 13 00:49:02.862406 ignition[1020]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 13 00:49:02.864872 ignition[1020]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 13 00:49:02.864872 ignition[1020]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 13 00:49:02.864872 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 13 00:49:02.864872 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Aug 13 00:49:02.862799 unknown[1020]: wrote ssh authorized keys file for user: core Aug 13 00:49:03.068648 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Aug 13 00:49:04.168062 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 13 00:49:04.169727 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Aug 13 00:49:04.169727 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Aug 13 00:49:04.169727 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 13 00:49:04.169727 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file 
"/sysroot/home/core/nginx.yaml" Aug 13 00:49:04.169727 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 00:49:04.169727 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 00:49:04.169727 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 00:49:04.169727 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 00:49:04.176325 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 00:49:04.176325 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 00:49:04.176325 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 00:49:04.176325 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 00:49:04.176325 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 00:49:04.176325 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Aug 13 00:49:04.664949 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Aug 13 00:49:04.995631 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 00:49:04.995631 ignition[1020]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Aug 13 00:49:04.997845 ignition[1020]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 00:49:04.998933 ignition[1020]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 00:49:04.998933 ignition[1020]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Aug 13 00:49:04.998933 ignition[1020]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Aug 13 00:49:05.001340 ignition[1020]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Aug 13 00:49:05.001340 ignition[1020]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Aug 13 00:49:05.001340 ignition[1020]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Aug 13 00:49:05.001340 ignition[1020]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Aug 13 00:49:05.001340 ignition[1020]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Aug 13 00:49:05.001340 
ignition[1020]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 13 00:49:05.001340 ignition[1020]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 13 00:49:05.001340 ignition[1020]: INFO : files: files passed Aug 13 00:49:05.001340 ignition[1020]: INFO : Ignition finished successfully Aug 13 00:49:05.001617 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 13 00:49:05.006296 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 13 00:49:05.009373 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 13 00:49:05.016449 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 13 00:49:05.016551 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Aug 13 00:49:05.024023 initrd-setup-root-after-ignition[1051]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 00:49:05.024023 initrd-setup-root-after-ignition[1051]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 13 00:49:05.026550 initrd-setup-root-after-ignition[1055]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 00:49:05.028984 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 00:49:05.029792 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 13 00:49:05.031595 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 13 00:49:05.088300 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 13 00:49:05.088433 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 13 00:49:05.089978 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 13 00:49:05.090821 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 13 00:49:05.092038 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 13 00:49:05.092757 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 13 00:49:05.124161 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 00:49:05.126603 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 13 00:49:05.143033 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 13 00:49:05.143694 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 00:49:05.144408 systemd[1]: Stopped target timers.target - Timer Units. Aug 13 00:49:05.145591 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 13 00:49:05.145725 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 00:49:05.147147 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 13 00:49:05.147941 systemd[1]: Stopped target basic.target - Basic System. Aug 13 00:49:05.148954 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 13 00:49:05.150155 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 00:49:05.151285 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 13 00:49:05.152368 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. 
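The files stage above downloads the helm tarball from get.helm.sh and the kubernetes sysext image from extensions.flatcar.org, and the fetch stage earlier logs "parsing config with SHA512: ..." for the Ignition config. In that same spirit, here is a hedged sketch of checking a downloaded artifact against a SHA512 digest; the URL is the one named in the log, the expected digest is a placeholder, and this is not how Ignition itself is implemented.

```python
# Illustrative only: verify a downloaded artifact (e.g. the helm tarball the
# files stage wrote above) against a SHA512 digest, mirroring the SHA512
# verification the fetch stage logs for the Ignition config.
import hashlib
import urllib.request

def sha512_of(url: str, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha512()
    with urllib.request.urlopen(url) as resp:
        while chunk := resp.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

url = "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"  # from the log
expected = "<expected sha512 hex digest>"                    # placeholder value
actual = sha512_of(url)
print("verified" if actual == expected else f"mismatch: {actual}")
```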
Aug 13 00:49:05.153591 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 13 00:49:05.154826 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 00:49:05.156103 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 13 00:49:05.157321 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 13 00:49:05.158506 systemd[1]: Stopped target swap.target - Swaps. Aug 13 00:49:05.159631 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 13 00:49:05.159724 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 13 00:49:05.161234 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 13 00:49:05.162025 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 00:49:05.163017 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 13 00:49:05.165281 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 00:49:05.166236 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 13 00:49:05.166368 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 13 00:49:05.167834 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 13 00:49:05.167939 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 00:49:05.168720 systemd[1]: ignition-files.service: Deactivated successfully. Aug 13 00:49:05.168845 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 13 00:49:05.171255 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 13 00:49:05.174307 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 13 00:49:05.175389 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 13 00:49:05.175535 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 00:49:05.177021 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 13 00:49:05.177113 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 00:49:05.183861 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 13 00:49:05.183964 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 13 00:49:05.197020 ignition[1075]: INFO : Ignition 2.21.0 Aug 13 00:49:05.197020 ignition[1075]: INFO : Stage: umount Aug 13 00:49:05.199698 ignition[1075]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 00:49:05.199698 ignition[1075]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 00:49:05.199698 ignition[1075]: INFO : umount: umount passed Aug 13 00:49:05.199698 ignition[1075]: INFO : Ignition finished successfully Aug 13 00:49:05.200006 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 13 00:49:05.200134 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 13 00:49:05.203836 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 13 00:49:05.209396 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 13 00:49:05.209467 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 13 00:49:05.228043 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 13 00:49:05.228093 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 13 00:49:05.229149 systemd[1]: ignition-fetch.service: Deactivated successfully. 
Aug 13 00:49:05.229238 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Aug 13 00:49:05.230217 systemd[1]: Stopped target network.target - Network. Aug 13 00:49:05.231199 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 13 00:49:05.231261 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 00:49:05.232321 systemd[1]: Stopped target paths.target - Path Units. Aug 13 00:49:05.233353 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 13 00:49:05.237243 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 00:49:05.238022 systemd[1]: Stopped target slices.target - Slice Units. Aug 13 00:49:05.239116 systemd[1]: Stopped target sockets.target - Socket Units. Aug 13 00:49:05.240360 systemd[1]: iscsid.socket: Deactivated successfully. Aug 13 00:49:05.240401 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 00:49:05.241662 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 13 00:49:05.241699 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 00:49:05.242717 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 13 00:49:05.242768 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 13 00:49:05.243795 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 13 00:49:05.243838 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 13 00:49:05.244979 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 13 00:49:05.246055 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 13 00:49:05.247493 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 13 00:49:05.247596 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 13 00:49:05.248846 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 13 00:49:05.248923 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 13 00:49:05.252030 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 13 00:49:05.252145 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 13 00:49:05.255480 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Aug 13 00:49:05.256787 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 13 00:49:05.256843 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 00:49:05.260266 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Aug 13 00:49:05.260486 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 13 00:49:05.260602 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 13 00:49:05.262949 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Aug 13 00:49:05.263344 systemd[1]: Stopped target network-pre.target - Preparation for Network. Aug 13 00:49:05.264493 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 13 00:49:05.264533 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 13 00:49:05.266413 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 13 00:49:05.268506 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 13 00:49:05.268561 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Aug 13 00:49:05.269124 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 00:49:05.269182 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:49:05.271256 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 13 00:49:05.271306 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 13 00:49:05.272269 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 00:49:05.276830 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Aug 13 00:49:05.289827 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 13 00:49:05.289955 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 13 00:49:05.291288 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 13 00:49:05.291459 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 00:49:05.292812 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 13 00:49:05.292868 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 13 00:49:05.294197 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 13 00:49:05.294244 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 00:49:05.295404 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 13 00:49:05.295452 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 13 00:49:05.297070 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 13 00:49:05.297116 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 13 00:49:05.298365 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 00:49:05.298416 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 00:49:05.301275 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 13 00:49:05.302040 systemd[1]: systemd-network-generator.service: Deactivated successfully. Aug 13 00:49:05.302090 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 00:49:05.304266 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 13 00:49:05.304316 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 00:49:05.305283 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Aug 13 00:49:05.305328 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 00:49:05.307595 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 13 00:49:05.307641 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 00:49:05.308451 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 00:49:05.308495 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:49:05.318009 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 13 00:49:05.318130 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 13 00:49:05.319665 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 13 00:49:05.321488 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 13 00:49:05.353288 systemd[1]: Switching root. 
Aug 13 00:49:05.385293 systemd-journald[206]: Journal stopped Aug 13 00:49:06.436082 systemd-journald[206]: Received SIGTERM from PID 1 (systemd). Aug 13 00:49:06.436105 kernel: SELinux: policy capability network_peer_controls=1 Aug 13 00:49:06.436116 kernel: SELinux: policy capability open_perms=1 Aug 13 00:49:06.436128 kernel: SELinux: policy capability extended_socket_class=1 Aug 13 00:49:06.436136 kernel: SELinux: policy capability always_check_network=0 Aug 13 00:49:06.436145 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 13 00:49:06.436154 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 13 00:49:06.436163 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 13 00:49:06.437202 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 13 00:49:06.437218 kernel: SELinux: policy capability userspace_initial_context=0 Aug 13 00:49:06.437231 kernel: audit: type=1403 audit(1755046145.524:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 13 00:49:06.437241 systemd[1]: Successfully loaded SELinux policy in 79.590ms. Aug 13 00:49:06.437252 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.773ms. Aug 13 00:49:06.437263 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Aug 13 00:49:06.437273 systemd[1]: Detected virtualization kvm. Aug 13 00:49:06.437284 systemd[1]: Detected architecture x86-64. Aug 13 00:49:06.437293 systemd[1]: Detected first boot. Aug 13 00:49:06.437303 systemd[1]: Initializing machine ID from random generator. Aug 13 00:49:06.437312 zram_generator::config[1120]: No configuration found. Aug 13 00:49:06.437322 kernel: Guest personality initialized and is inactive Aug 13 00:49:06.437331 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Aug 13 00:49:06.437340 kernel: Initialized host personality Aug 13 00:49:06.437350 kernel: NET: Registered PF_VSOCK protocol family Aug 13 00:49:06.437360 systemd[1]: Populated /etc with preset unit settings. Aug 13 00:49:06.437370 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Aug 13 00:49:06.437379 systemd[1]: initrd-switch-root.service: Deactivated successfully. Aug 13 00:49:06.437389 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Aug 13 00:49:06.437399 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Aug 13 00:49:06.437409 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Aug 13 00:49:06.437425 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Aug 13 00:49:06.437437 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Aug 13 00:49:06.437446 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Aug 13 00:49:06.437456 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Aug 13 00:49:06.437465 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Aug 13 00:49:06.437475 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Aug 13 00:49:06.437484 systemd[1]: Created slice user.slice - User and Session Slice. 
Aug 13 00:49:06.437495 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 00:49:06.437505 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 00:49:06.437514 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Aug 13 00:49:06.437524 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Aug 13 00:49:06.437536 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Aug 13 00:49:06.437546 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 00:49:06.437556 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Aug 13 00:49:06.437566 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 00:49:06.437579 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 00:49:06.437589 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Aug 13 00:49:06.437598 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Aug 13 00:49:06.437608 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Aug 13 00:49:06.437618 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Aug 13 00:49:06.437627 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 00:49:06.437637 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 00:49:06.437646 systemd[1]: Reached target slices.target - Slice Units. Aug 13 00:49:06.437658 systemd[1]: Reached target swap.target - Swaps. Aug 13 00:49:06.437667 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Aug 13 00:49:06.437677 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 13 00:49:06.437686 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Aug 13 00:49:06.437696 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 00:49:06.437708 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 00:49:06.437717 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 00:49:06.437727 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Aug 13 00:49:06.437736 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Aug 13 00:49:06.437747 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Aug 13 00:49:06.437757 systemd[1]: Mounting media.mount - External Media Directory... Aug 13 00:49:06.437767 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:49:06.437777 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Aug 13 00:49:06.437788 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Aug 13 00:49:06.437798 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Aug 13 00:49:06.437808 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 13 00:49:06.437817 systemd[1]: Reached target machines.target - Containers. 
Aug 13 00:49:06.437828 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Aug 13 00:49:06.437837 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:49:06.437847 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 00:49:06.437857 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 13 00:49:06.437869 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 00:49:06.437878 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 00:49:06.437888 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 00:49:06.437897 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Aug 13 00:49:06.437907 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 00:49:06.437917 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 00:49:06.437927 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 13 00:49:06.437936 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Aug 13 00:49:06.437946 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 13 00:49:06.437957 systemd[1]: Stopped systemd-fsck-usr.service. Aug 13 00:49:06.437967 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 00:49:06.437977 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 00:49:06.437987 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 00:49:06.437996 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 13 00:49:06.438006 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 13 00:49:06.438016 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Aug 13 00:49:06.438025 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 00:49:06.438037 systemd[1]: verity-setup.service: Deactivated successfully. Aug 13 00:49:06.438046 systemd[1]: Stopped verity-setup.service. Aug 13 00:49:06.438056 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:49:06.438066 kernel: loop: module loaded Aug 13 00:49:06.438075 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Aug 13 00:49:06.438084 kernel: fuse: init (API version 7.41) Aug 13 00:49:06.438094 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Aug 13 00:49:06.438103 systemd[1]: Mounted media.mount - External Media Directory. Aug 13 00:49:06.438114 kernel: ACPI: bus type drm_connector registered Aug 13 00:49:06.438124 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 13 00:49:06.438133 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 13 00:49:06.438143 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 13 00:49:06.438154 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. 
Aug 13 00:49:06.438163 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 00:49:06.439732 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 13 00:49:06.439747 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Aug 13 00:49:06.439777 systemd-journald[1208]: Collecting audit messages is disabled. Aug 13 00:49:06.439799 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:49:06.439810 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 00:49:06.439820 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:49:06.439829 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 00:49:06.439841 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:49:06.439850 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 00:49:06.439860 systemd-journald[1208]: Journal started Aug 13 00:49:06.439879 systemd-journald[1208]: Runtime Journal (/run/log/journal/c91957277e804a4ca63d06db9e7a0808) is 8M, max 78.5M, 70.5M free. Aug 13 00:49:06.097095 systemd[1]: Queued start job for default target multi-user.target. Aug 13 00:49:06.112826 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Aug 13 00:49:06.113419 systemd[1]: systemd-journald.service: Deactivated successfully. Aug 13 00:49:06.443347 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 00:49:06.444117 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 13 00:49:06.444491 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 13 00:49:06.445375 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:49:06.445647 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 00:49:06.446565 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 00:49:06.447473 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 00:49:06.448403 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 13 00:49:06.449614 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Aug 13 00:49:06.463488 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 13 00:49:06.468249 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Aug 13 00:49:06.469786 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Aug 13 00:49:06.471242 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 13 00:49:06.471321 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 00:49:06.473573 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Aug 13 00:49:06.480601 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Aug 13 00:49:06.482327 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:49:06.485348 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Aug 13 00:49:06.489412 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Aug 13 00:49:06.490274 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:49:06.491646 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Aug 13 00:49:06.492922 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 00:49:06.495472 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 00:49:06.499790 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 13 00:49:06.502386 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 13 00:49:06.506556 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 13 00:49:06.507234 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 13 00:49:06.539245 systemd-journald[1208]: Time spent on flushing to /var/log/journal/c91957277e804a4ca63d06db9e7a0808 is 27.709ms for 997 entries. Aug 13 00:49:06.539245 systemd-journald[1208]: System Journal (/var/log/journal/c91957277e804a4ca63d06db9e7a0808) is 8M, max 195.6M, 187.6M free. Aug 13 00:49:06.582823 systemd-journald[1208]: Received client request to flush runtime journal. Aug 13 00:49:06.582880 kernel: loop0: detected capacity change from 0 to 8 Aug 13 00:49:06.582903 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 00:49:06.547938 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 13 00:49:06.550364 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 00:49:06.555635 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Aug 13 00:49:06.559382 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Aug 13 00:49:06.584796 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 13 00:49:06.597430 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:49:06.605716 systemd-tmpfiles[1246]: ACLs are not supported, ignoring. Aug 13 00:49:06.606461 kernel: loop1: detected capacity change from 0 to 221472 Aug 13 00:49:06.605733 systemd-tmpfiles[1246]: ACLs are not supported, ignoring. Aug 13 00:49:06.613755 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 00:49:06.617411 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Aug 13 00:49:06.624313 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 13 00:49:06.650216 kernel: loop2: detected capacity change from 0 to 146240 Aug 13 00:49:06.674321 systemd[1]: Finished systemd-sysusers.service - Create System Users. Aug 13 00:49:06.676897 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 00:49:06.700240 kernel: loop3: detected capacity change from 0 to 113872 Aug 13 00:49:06.705524 systemd-tmpfiles[1268]: ACLs are not supported, ignoring. Aug 13 00:49:06.705837 systemd-tmpfiles[1268]: ACLs are not supported, ignoring. Aug 13 00:49:06.712363 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Aug 13 00:49:06.733231 kernel: loop4: detected capacity change from 0 to 8 Aug 13 00:49:06.737297 kernel: loop5: detected capacity change from 0 to 221472 Aug 13 00:49:06.758201 kernel: loop6: detected capacity change from 0 to 146240 Aug 13 00:49:06.781204 kernel: loop7: detected capacity change from 0 to 113872 Aug 13 00:49:06.796141 (sd-merge)[1272]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'. Aug 13 00:49:06.797013 (sd-merge)[1272]: Merged extensions into '/usr'. Aug 13 00:49:06.805538 systemd[1]: Reload requested from client PID 1245 ('systemd-sysext') (unit systemd-sysext.service)... Aug 13 00:49:06.805655 systemd[1]: Reloading... Aug 13 00:49:06.885299 zram_generator::config[1297]: No configuration found. Aug 13 00:49:07.012677 ldconfig[1240]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 00:49:07.033996 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:49:07.103440 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 13 00:49:07.104003 systemd[1]: Reloading finished in 297 ms. Aug 13 00:49:07.119907 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 13 00:49:07.121022 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 13 00:49:07.132300 systemd[1]: Starting ensure-sysext.service... Aug 13 00:49:07.135285 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 00:49:07.144137 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 13 00:49:07.148020 systemd[1]: Reload requested from client PID 1341 ('systemctl') (unit ensure-sysext.service)... Aug 13 00:49:07.148097 systemd[1]: Reloading... Aug 13 00:49:07.160783 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Aug 13 00:49:07.161069 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Aug 13 00:49:07.161466 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 00:49:07.162381 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 13 00:49:07.163376 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 13 00:49:07.163674 systemd-tmpfiles[1342]: ACLs are not supported, ignoring. Aug 13 00:49:07.163786 systemd-tmpfiles[1342]: ACLs are not supported, ignoring. Aug 13 00:49:07.167625 systemd-tmpfiles[1342]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 00:49:07.167690 systemd-tmpfiles[1342]: Skipping /boot Aug 13 00:49:07.181557 systemd-tmpfiles[1342]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 00:49:07.182273 systemd-tmpfiles[1342]: Skipping /boot Aug 13 00:49:07.211217 zram_generator::config[1369]: No configuration found. Aug 13 00:49:07.309474 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:49:07.378140 systemd[1]: Reloading finished in 229 ms. 
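The (sd-merge) lines above show systemd-sysext merging the 'containerd-flatcar', 'docker-flatcar', 'kubernetes' and 'oem-akamai' extension images into /usr, followed by a daemon reload. As a rough illustration only, the sketch below lists the extension images such a merge would consider, based on the /etc/extensions/kubernetes.raw symlink the files stage wrote earlier; the set of search directories is an assumption, not something taken from the log.

```python
# Sketch: enumerate sysext images by walking assumed extension directories.
# /etc/extensions appears in the log (symlink written by the files stage);
# /var/lib/extensions is an assumed additional search path.
from pathlib import Path

for ext_dir in (Path("/etc/extensions"), Path("/var/lib/extensions")):
    if not ext_dir.is_dir():
        continue
    for image in sorted(ext_dir.glob("*.raw")):
        target = image.resolve() if image.is_symlink() else image
        print(f"{image.name} -> {target}")
```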
Aug 13 00:49:07.406773 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 00:49:07.414274 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 13 00:49:07.417352 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 13 00:49:07.420110 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 13 00:49:07.426529 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 00:49:07.431443 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 00:49:07.434641 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 13 00:49:07.438896 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:49:07.439046 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:49:07.440842 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 00:49:07.452339 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 00:49:07.453905 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 00:49:07.454636 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:49:07.455279 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 00:49:07.455366 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:49:07.459338 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 13 00:49:07.464950 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:49:07.465241 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 00:49:07.466127 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:49:07.466567 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:49:07.466708 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:49:07.466779 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 00:49:07.466853 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:49:07.474247 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:49:07.474478 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:49:07.482233 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Aug 13 00:49:07.490546 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 00:49:07.491301 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:49:07.491439 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 00:49:07.491600 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:49:07.495239 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 13 00:49:07.506295 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 13 00:49:07.508918 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 13 00:49:07.510682 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:49:07.510875 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 00:49:07.515979 systemd-udevd[1418]: Using default interface naming scheme 'v255'. Aug 13 00:49:07.520561 systemd[1]: Finished ensure-sysext.service. Aug 13 00:49:07.525452 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:49:07.532927 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Aug 13 00:49:07.533793 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:49:07.539815 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 00:49:07.540907 augenrules[1452]: No rules Aug 13 00:49:07.541493 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:49:07.542120 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 00:49:07.543712 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 00:49:07.543963 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 13 00:49:07.548881 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:49:07.549657 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 00:49:07.554822 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 13 00:49:07.556523 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 00:49:07.574608 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 13 00:49:07.575554 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 00:49:07.577831 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 13 00:49:07.579507 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 00:49:07.584032 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 00:49:07.671081 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
Aug 13 00:49:07.784220 kernel: mousedev: PS/2 mouse device common for all mice Aug 13 00:49:07.808298 systemd-networkd[1474]: lo: Link UP Aug 13 00:49:07.808312 systemd-networkd[1474]: lo: Gained carrier Aug 13 00:49:07.811834 systemd-networkd[1474]: Enumeration completed Aug 13 00:49:07.812762 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 00:49:07.815093 systemd-networkd[1474]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:49:07.815108 systemd-networkd[1474]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:49:07.815658 systemd-networkd[1474]: eth0: Link UP Aug 13 00:49:07.815840 systemd-networkd[1474]: eth0: Gained carrier Aug 13 00:49:07.815862 systemd-networkd[1474]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:49:07.816155 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Aug 13 00:49:07.819315 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 13 00:49:07.819963 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Aug 13 00:49:07.821268 systemd[1]: Reached target time-set.target - System Time Set. Aug 13 00:49:07.826468 systemd-resolved[1416]: Positive Trust Anchors: Aug 13 00:49:07.826823 systemd-resolved[1416]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 00:49:07.826947 systemd-resolved[1416]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 00:49:07.831606 systemd-resolved[1416]: Defaulting to hostname 'linux'. Aug 13 00:49:07.834469 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 00:49:07.835097 systemd[1]: Reached target network.target - Network. Aug 13 00:49:07.835605 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 00:49:07.836141 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 00:49:07.838587 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 13 00:49:07.839198 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 13 00:49:07.840232 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Aug 13 00:49:07.841296 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 13 00:49:07.842414 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 13 00:49:07.843233 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 13 00:49:07.844242 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). 
Aug 13 00:49:07.844274 systemd[1]: Reached target paths.target - Path Units. Aug 13 00:49:07.845234 systemd[1]: Reached target timers.target - Timer Units. Aug 13 00:49:07.846905 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 13 00:49:07.849904 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 13 00:49:07.853565 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Aug 13 00:49:07.855242 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Aug 13 00:49:07.857227 systemd[1]: Reached target ssh-access.target - SSH Access Available. Aug 13 00:49:07.863230 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Aug 13 00:49:07.864009 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 13 00:49:07.865702 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Aug 13 00:49:07.868523 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Aug 13 00:49:07.871936 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 13 00:49:07.879221 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 00:49:07.906213 kernel: ACPI: button: Power Button [PWRF] Aug 13 00:49:07.910130 systemd[1]: Reached target basic.target - Basic System. Aug 13 00:49:07.911349 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 13 00:49:07.911382 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 13 00:49:07.913887 systemd[1]: Starting containerd.service - containerd container runtime... Aug 13 00:49:07.918755 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Aug 13 00:49:07.926243 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Aug 13 00:49:07.926477 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Aug 13 00:49:07.922371 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 13 00:49:07.927674 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 13 00:49:07.934247 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 13 00:49:07.940049 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 13 00:49:07.940672 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 13 00:49:07.945279 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Aug 13 00:49:07.948741 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 13 00:49:07.954521 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 13 00:49:07.961344 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 13 00:49:07.969433 jq[1519]: false Aug 13 00:49:07.970120 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 13 00:49:07.983945 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 13 00:49:07.986100 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Aug 13 00:49:07.988545 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 13 00:49:07.993533 systemd[1]: Starting update-engine.service - Update Engine... Aug 13 00:49:08.001250 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 13 00:49:08.006047 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 13 00:49:08.007490 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 00:49:08.008456 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 13 00:49:08.012999 systemd[1]: motdgen.service: Deactivated successfully. Aug 13 00:49:08.015678 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 13 00:49:08.019030 oslogin_cache_refresh[1521]: Refreshing passwd entry cache Aug 13 00:49:08.020481 google_oslogin_nss_cache[1521]: oslogin_cache_refresh[1521]: Refreshing passwd entry cache Aug 13 00:49:08.025525 extend-filesystems[1520]: Found /dev/sda6 Aug 13 00:49:08.028980 extend-filesystems[1520]: Found /dev/sda9 Aug 13 00:49:08.031259 extend-filesystems[1520]: Checking size of /dev/sda9 Aug 13 00:49:08.031908 oslogin_cache_refresh[1521]: Failure getting users, quitting Aug 13 00:49:08.035678 google_oslogin_nss_cache[1521]: oslogin_cache_refresh[1521]: Failure getting users, quitting Aug 13 00:49:08.035678 google_oslogin_nss_cache[1521]: oslogin_cache_refresh[1521]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Aug 13 00:49:08.035678 google_oslogin_nss_cache[1521]: oslogin_cache_refresh[1521]: Refreshing group entry cache Aug 13 00:49:08.035678 google_oslogin_nss_cache[1521]: oslogin_cache_refresh[1521]: Failure getting groups, quitting Aug 13 00:49:08.035678 google_oslogin_nss_cache[1521]: oslogin_cache_refresh[1521]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Aug 13 00:49:08.031923 oslogin_cache_refresh[1521]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Aug 13 00:49:08.031959 oslogin_cache_refresh[1521]: Refreshing group entry cache Aug 13 00:49:08.032446 oslogin_cache_refresh[1521]: Failure getting groups, quitting Aug 13 00:49:08.032455 oslogin_cache_refresh[1521]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Aug 13 00:49:08.043187 extend-filesystems[1520]: Resized partition /dev/sda9 Aug 13 00:49:08.043789 extend-filesystems[1555]: resize2fs 1.47.2 (1-Jan-2025) Aug 13 00:49:08.049581 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 555003 blocks Aug 13 00:49:08.049603 kernel: EXT4-fs (sda9): resized filesystem to 555003 Aug 13 00:49:08.049627 extend-filesystems[1555]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Aug 13 00:49:08.049627 extend-filesystems[1555]: old_desc_blocks = 1, new_desc_blocks = 1 Aug 13 00:49:08.049627 extend-filesystems[1555]: The filesystem on /dev/sda9 is now 555003 (4k) blocks long. Aug 13 00:49:08.057947 update_engine[1533]: I20250813 00:49:08.057623 1533 main.cc:92] Flatcar Update Engine starting Aug 13 00:49:08.058124 extend-filesystems[1520]: Resized filesystem in /dev/sda9 Aug 13 00:49:08.091414 coreos-metadata[1516]: Aug 13 00:49:08.091 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Aug 13 00:49:08.103355 systemd[1]: extend-filesystems.service: Deactivated successfully. 
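The extend-filesystems entries above report an online resize of /dev/sda9 from 553472 to 555003 blocks with a 4 KiB block size. The small worked computation below just puts numbers on that change; both block counts and the block size are the figures from the log.

```python
# Worked numbers for the online resize logged above (figures from the log).
BLOCK = 4096
old_blocks, new_blocks = 553472, 555003

old_bytes = old_blocks * BLOCK                 # 2,267,021,312 bytes
new_bytes = new_blocks * BLOCK                 # 2,273,292,288 bytes
grown = (new_blocks - old_blocks) * BLOCK      # 1531 blocks

print(f"old size : {old_bytes / 2**30:.2f} GiB")   # ~2.11 GiB
print(f"new size : {new_bytes / 2**30:.2f} GiB")   # ~2.12 GiB
print(f"gained   : {grown / 2**20:.2f} MiB")       # ~5.98 MiB
```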
Aug 13 00:49:08.104246 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 13 00:49:08.106625 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Aug 13 00:49:08.106892 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Aug 13 00:49:08.112363 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 13 00:49:08.112632 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 13 00:49:08.126219 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Aug 13 00:49:08.135728 jq[1534]: true Aug 13 00:49:08.136085 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 13 00:49:08.160807 (ntainerd)[1575]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 13 00:49:08.169356 jq[1577]: true Aug 13 00:49:08.177943 dbus-daemon[1517]: [system] SELinux support is enabled Aug 13 00:49:08.178085 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 13 00:49:08.178733 tar[1567]: linux-amd64/helm Aug 13 00:49:08.182303 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 13 00:49:08.182338 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 13 00:49:08.184254 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 00:49:08.184280 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 13 00:49:08.202665 systemd[1]: Started update-engine.service - Update Engine. Aug 13 00:49:08.203436 update_engine[1533]: I20250813 00:49:08.203382 1533 update_check_scheduler.cc:74] Next update check in 11m59s Aug 13 00:49:08.227868 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 13 00:49:08.246607 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 13 00:49:08.291420 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:49:08.306303 systemd-logind[1528]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 13 00:49:08.309388 systemd-logind[1528]: New seat seat0. Aug 13 00:49:08.318390 systemd[1]: Started systemd-logind.service - User Login Management. Aug 13 00:49:08.352205 kernel: EDAC MC: Ver: 3.0.0 Aug 13 00:49:08.359429 bash[1605]: Updated "/home/core/.ssh/authorized_keys" Aug 13 00:49:08.361057 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 13 00:49:08.368342 systemd[1]: Starting sshkeys.service... Aug 13 00:49:08.391242 systemd-networkd[1474]: eth0: DHCPv4 address 172.234.199.101/24, gateway 172.234.199.1 acquired from 23.194.118.65 Aug 13 00:49:08.392572 systemd-timesyncd[1450]: Network configuration changed, trying to establish connection. 
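The DHCPv4 lease that systemd-networkd reports for eth0 near the end of this span (172.234.199.101/24, gateway 172.234.199.1) can be sanity-checked with the stdlib ipaddress module:

```python
import ipaddress

# Values taken from the systemd-networkd DHCPv4 line above.
iface = ipaddress.ip_interface("172.234.199.101/24")
gateway = ipaddress.ip_address("172.234.199.1")

print(iface.network)                # 172.234.199.0/24
print(iface.network.num_addresses)  # 256
print(gateway in iface.network)     # True: the gateway is on-link
```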
Aug 13 00:49:08.393419 dbus-daemon[1517]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1474 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Aug 13 00:49:08.394917 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Aug 13 00:49:08.398777 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Aug 13 00:49:08.403517 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Aug 13 00:49:08.482359 systemd-logind[1528]: Watching system buttons on /dev/input/event2 (Power Button) Aug 13 00:49:08.490900 sshd_keygen[1549]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 00:49:08.491201 coreos-metadata[1611]: Aug 13 00:49:08.491 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Aug 13 00:49:09.414117 systemd-timesyncd[1450]: Contacted time server 45.79.82.45:123 (0.flatcar.pool.ntp.org). Aug 13 00:49:09.414406 systemd-timesyncd[1450]: Initial clock synchronization to Wed 2025-08-13 00:49:09.413512 UTC. Aug 13 00:49:09.415325 systemd-resolved[1416]: Clock change detected. Flushing caches. Aug 13 00:49:09.422646 coreos-metadata[1611]: Aug 13 00:49:09.422 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 Aug 13 00:49:09.427818 containerd[1575]: time="2025-08-13T00:49:09Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Aug 13 00:49:09.434051 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 13 00:49:09.434740 containerd[1575]: time="2025-08-13T00:49:09.434718542Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Aug 13 00:49:09.445782 dbus-daemon[1517]: [system] Successfully activated service 'org.freedesktop.hostname1' Aug 13 00:49:09.446182 dbus-daemon[1517]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1612 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Aug 13 00:49:09.464121 containerd[1575]: time="2025-08-13T00:49:09.463187142Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.47µs" Aug 13 00:49:09.464121 containerd[1575]: time="2025-08-13T00:49:09.463209792Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Aug 13 00:49:09.464121 containerd[1575]: time="2025-08-13T00:49:09.463225842Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Aug 13 00:49:09.464121 containerd[1575]: time="2025-08-13T00:49:09.463365342Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Aug 13 00:49:09.464121 containerd[1575]: time="2025-08-13T00:49:09.463379252Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Aug 13 00:49:09.464121 containerd[1575]: time="2025-08-13T00:49:09.463399022Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Aug 13 00:49:09.464121 containerd[1575]: time="2025-08-13T00:49:09.463456892Z" level=info msg="skip loading plugin" error="no scratch file 
generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Aug 13 00:49:09.464121 containerd[1575]: time="2025-08-13T00:49:09.463467362Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Aug 13 00:49:09.464121 containerd[1575]: time="2025-08-13T00:49:09.463699852Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Aug 13 00:49:09.464121 containerd[1575]: time="2025-08-13T00:49:09.463713572Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Aug 13 00:49:09.464121 containerd[1575]: time="2025-08-13T00:49:09.463724932Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Aug 13 00:49:09.464121 containerd[1575]: time="2025-08-13T00:49:09.463732272Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Aug 13 00:49:09.464346 containerd[1575]: time="2025-08-13T00:49:09.463821612Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Aug 13 00:49:09.464346 containerd[1575]: time="2025-08-13T00:49:09.464015292Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Aug 13 00:49:09.464346 containerd[1575]: time="2025-08-13T00:49:09.464042152Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Aug 13 00:49:09.464346 containerd[1575]: time="2025-08-13T00:49:09.464050772Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Aug 13 00:49:09.464946 containerd[1575]: time="2025-08-13T00:49:09.464930132Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Aug 13 00:49:09.465838 containerd[1575]: time="2025-08-13T00:49:09.465806172Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Aug 13 00:49:09.466047 containerd[1575]: time="2025-08-13T00:49:09.466018112Z" level=info msg="metadata content store policy set" policy=shared Aug 13 00:49:09.470761 containerd[1575]: time="2025-08-13T00:49:09.470216282Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Aug 13 00:49:09.470761 containerd[1575]: time="2025-08-13T00:49:09.470270992Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Aug 13 00:49:09.470761 containerd[1575]: time="2025-08-13T00:49:09.470283912Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Aug 13 00:49:09.470761 containerd[1575]: time="2025-08-13T00:49:09.470293332Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Aug 13 00:49:09.470761 containerd[1575]: time="2025-08-13T00:49:09.470302992Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Aug 13 00:49:09.470761 containerd[1575]: time="2025-08-13T00:49:09.470311222Z" level=info 
msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Aug 13 00:49:09.470761 containerd[1575]: time="2025-08-13T00:49:09.470326222Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Aug 13 00:49:09.470761 containerd[1575]: time="2025-08-13T00:49:09.470335722Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Aug 13 00:49:09.470761 containerd[1575]: time="2025-08-13T00:49:09.470347132Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Aug 13 00:49:09.470761 containerd[1575]: time="2025-08-13T00:49:09.470358992Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Aug 13 00:49:09.470761 containerd[1575]: time="2025-08-13T00:49:09.470366242Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Aug 13 00:49:09.470761 containerd[1575]: time="2025-08-13T00:49:09.470375582Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Aug 13 00:49:09.470761 containerd[1575]: time="2025-08-13T00:49:09.470479102Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Aug 13 00:49:09.470761 containerd[1575]: time="2025-08-13T00:49:09.470495952Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Aug 13 00:49:09.471007 containerd[1575]: time="2025-08-13T00:49:09.470507752Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Aug 13 00:49:09.471007 containerd[1575]: time="2025-08-13T00:49:09.470558802Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Aug 13 00:49:09.471007 containerd[1575]: time="2025-08-13T00:49:09.470569732Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Aug 13 00:49:09.471007 containerd[1575]: time="2025-08-13T00:49:09.470582822Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Aug 13 00:49:09.471007 containerd[1575]: time="2025-08-13T00:49:09.470597582Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Aug 13 00:49:09.471007 containerd[1575]: time="2025-08-13T00:49:09.470606862Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Aug 13 00:49:09.471007 containerd[1575]: time="2025-08-13T00:49:09.470616352Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Aug 13 00:49:09.471007 containerd[1575]: time="2025-08-13T00:49:09.470624512Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Aug 13 00:49:09.471007 containerd[1575]: time="2025-08-13T00:49:09.470632522Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Aug 13 00:49:09.471007 containerd[1575]: time="2025-08-13T00:49:09.470691412Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Aug 13 00:49:09.471007 containerd[1575]: time="2025-08-13T00:49:09.470703492Z" level=info msg="Start snapshots syncer" Aug 13 00:49:09.471007 containerd[1575]: time="2025-08-13T00:49:09.470729752Z" level=info msg="loading plugin" 
id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Aug 13 00:49:09.474714 containerd[1575]: time="2025-08-13T00:49:09.474680642Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Aug 13 00:49:09.477057 containerd[1575]: time="2025-08-13T00:49:09.476579122Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Aug 13 00:49:09.477057 containerd[1575]: time="2025-08-13T00:49:09.476669252Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Aug 13 00:49:09.477057 containerd[1575]: time="2025-08-13T00:49:09.476773962Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Aug 13 00:49:09.477057 containerd[1575]: time="2025-08-13T00:49:09.476804652Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Aug 13 00:49:09.477057 containerd[1575]: time="2025-08-13T00:49:09.476818232Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Aug 13 00:49:09.477057 containerd[1575]: time="2025-08-13T00:49:09.476830912Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Aug 13 00:49:09.477057 containerd[1575]: time="2025-08-13T00:49:09.476843022Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Aug 13 00:49:09.477057 containerd[1575]: time="2025-08-13T00:49:09.476854452Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Aug 13 00:49:09.477057 containerd[1575]: time="2025-08-13T00:49:09.476865862Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Aug 13 00:49:09.477057 containerd[1575]: 
time="2025-08-13T00:49:09.476887802Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Aug 13 00:49:09.477057 containerd[1575]: time="2025-08-13T00:49:09.476897362Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Aug 13 00:49:09.477057 containerd[1575]: time="2025-08-13T00:49:09.476912482Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Aug 13 00:49:09.479437 containerd[1575]: time="2025-08-13T00:49:09.479061192Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Aug 13 00:49:09.479437 containerd[1575]: time="2025-08-13T00:49:09.479087862Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Aug 13 00:49:09.479437 containerd[1575]: time="2025-08-13T00:49:09.479097302Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Aug 13 00:49:09.479437 containerd[1575]: time="2025-08-13T00:49:09.479106412Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Aug 13 00:49:09.479437 containerd[1575]: time="2025-08-13T00:49:09.479113492Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Aug 13 00:49:09.479437 containerd[1575]: time="2025-08-13T00:49:09.479122532Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Aug 13 00:49:09.479437 containerd[1575]: time="2025-08-13T00:49:09.479132002Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Aug 13 00:49:09.479437 containerd[1575]: time="2025-08-13T00:49:09.479143102Z" level=info msg="runtime interface created" Aug 13 00:49:09.479437 containerd[1575]: time="2025-08-13T00:49:09.479147772Z" level=info msg="created NRI interface" Aug 13 00:49:09.479437 containerd[1575]: time="2025-08-13T00:49:09.479154252Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Aug 13 00:49:09.479437 containerd[1575]: time="2025-08-13T00:49:09.479164532Z" level=info msg="Connect containerd service" Aug 13 00:49:09.479437 containerd[1575]: time="2025-08-13T00:49:09.479188772Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 13 00:49:09.482840 containerd[1575]: time="2025-08-13T00:49:09.482338672Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 00:49:09.483803 locksmithd[1581]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 00:49:09.491932 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Aug 13 00:49:09.503835 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 13 00:49:09.566561 systemd[1]: Starting polkit.service - Authorization Manager... Aug 13 00:49:09.568961 coreos-metadata[1611]: Aug 13 00:49:09.568 INFO Fetch successful Aug 13 00:49:09.676204 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 00:49:09.677228 systemd[1]: Finished issuegen.service - Generate /run/issue. 
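The only error containerd reports above is the CRI plugin finding no network config in /etc/cni/net.d. A hedged sketch of dropping in a minimal bridge conflist so the CNI loader has something to read; the subnet, bridge name, and file name are illustrative assumptions, not values taken from this host:

```python
import json
import pathlib

# Hypothetical example values; a real cluster's network add-on would install
# its own conflist here instead.
conflist = {
    "cniVersion": "1.0.0",
    "name": "containerd-net",
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "ranges": [[{"subnet": "10.88.0.0/16"}]],
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

path = pathlib.Path("/etc/cni/net.d/10-containerd-net.conflist")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(json.dumps(conflist, indent=2))
print("wrote", path)
```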
Aug 13 00:49:09.681770 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:49:09.689207 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 13 00:49:09.695893 update-ssh-keys[1648]: Updated "/home/core/.ssh/authorized_keys" Aug 13 00:49:09.696312 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Aug 13 00:49:09.700271 systemd[1]: Finished sshkeys.service. Aug 13 00:49:09.702821 containerd[1575]: time="2025-08-13T00:49:09.702791322Z" level=info msg="Start subscribing containerd event" Aug 13 00:49:09.703249 containerd[1575]: time="2025-08-13T00:49:09.702955332Z" level=info msg="Start recovering state" Aug 13 00:49:09.705382 containerd[1575]: time="2025-08-13T00:49:09.705363652Z" level=info msg="Start event monitor" Aug 13 00:49:09.705531 containerd[1575]: time="2025-08-13T00:49:09.705489612Z" level=info msg="Start cni network conf syncer for default" Aug 13 00:49:09.705695 containerd[1575]: time="2025-08-13T00:49:09.705680832Z" level=info msg="Start streaming server" Aug 13 00:49:09.705812 containerd[1575]: time="2025-08-13T00:49:09.705799382Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Aug 13 00:49:09.707733 containerd[1575]: time="2025-08-13T00:49:09.706194792Z" level=info msg="runtime interface starting up..." Aug 13 00:49:09.707733 containerd[1575]: time="2025-08-13T00:49:09.706208332Z" level=info msg="starting plugins..." Aug 13 00:49:09.707733 containerd[1575]: time="2025-08-13T00:49:09.706226022Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Aug 13 00:49:09.707733 containerd[1575]: time="2025-08-13T00:49:09.705622352Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 00:49:09.707733 containerd[1575]: time="2025-08-13T00:49:09.706376222Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 00:49:09.707733 containerd[1575]: time="2025-08-13T00:49:09.706424822Z" level=info msg="containerd successfully booted in 0.279405s" Aug 13 00:49:09.706604 systemd[1]: Started containerd.service - containerd container runtime. Aug 13 00:49:09.727000 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 13 00:49:09.731444 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 13 00:49:09.734603 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Aug 13 00:49:09.736705 systemd[1]: Reached target getty.target - Login Prompts. Aug 13 00:49:09.766817 polkitd[1645]: Started polkitd version 126 Aug 13 00:49:09.771178 polkitd[1645]: Loading rules from directory /etc/polkit-1/rules.d Aug 13 00:49:09.771440 polkitd[1645]: Loading rules from directory /run/polkit-1/rules.d Aug 13 00:49:09.771483 polkitd[1645]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Aug 13 00:49:09.771730 polkitd[1645]: Loading rules from directory /usr/local/share/polkit-1/rules.d Aug 13 00:49:09.771758 polkitd[1645]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Aug 13 00:49:09.771793 polkitd[1645]: Loading rules from directory /usr/share/polkit-1/rules.d Aug 13 00:49:09.772214 polkitd[1645]: Finished loading, compiling and executing 2 rules Aug 13 00:49:09.772459 systemd[1]: Started polkit.service - Authorization Manager. 
Aug 13 00:49:09.773384 dbus-daemon[1517]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Aug 13 00:49:09.774219 polkitd[1645]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Aug 13 00:49:09.785178 systemd-hostnamed[1612]: Hostname set to <172-234-199-101> (transient) Aug 13 00:49:09.785187 systemd-resolved[1416]: System hostname changed to '172-234-199-101'. Aug 13 00:49:09.922842 coreos-metadata[1516]: Aug 13 00:49:09.922 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Aug 13 00:49:09.932677 tar[1567]: linux-amd64/LICENSE Aug 13 00:49:09.932826 tar[1567]: linux-amd64/README.md Aug 13 00:49:09.957938 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 13 00:49:10.029885 coreos-metadata[1516]: Aug 13 00:49:10.029 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 Aug 13 00:49:10.232649 coreos-metadata[1516]: Aug 13 00:49:10.232 INFO Fetch successful Aug 13 00:49:10.232830 coreos-metadata[1516]: Aug 13 00:49:10.232 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 Aug 13 00:49:10.268778 systemd-networkd[1474]: eth0: Gained IPv6LL Aug 13 00:49:10.271232 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 13 00:49:10.272284 systemd[1]: Reached target network-online.target - Network is Online. Aug 13 00:49:10.275535 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:49:10.278683 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 13 00:49:10.298089 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 13 00:49:10.515658 coreos-metadata[1516]: Aug 13 00:49:10.515 INFO Fetch successful Aug 13 00:49:10.607468 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Aug 13 00:49:10.609036 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 13 00:49:11.137897 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:49:11.138878 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 13 00:49:11.141634 systemd[1]: Startup finished in 2.953s (kernel) + 7.865s (initrd) + 4.851s (userspace) = 15.670s. Aug 13 00:49:11.183892 (kubelet)[1713]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:49:11.654151 kubelet[1713]: E0813 00:49:11.654081 1713 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:49:11.656978 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:49:11.657169 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:49:11.657725 systemd[1]: kubelet.service: Consumed 837ms CPU time, 264.1M memory peak. Aug 13 00:49:12.479439 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 13 00:49:12.480640 systemd[1]: Started sshd@0-172.234.199.101:22-147.75.109.163:49182.service - OpenSSH per-connection server daemon (147.75.109.163:49182). 
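The coreos-metadata entries follow a token-then-fetch pattern against 169.254.169.254: a PUT to /v1/token, then authenticated GETs for /v1/instance, /v1/network and /v1/ssh-keys. A rough stdlib equivalent, assuming the Linode metadata service's handshake; treat the header names as assumptions rather than documented fact:

```python
import urllib.request

BASE = "http://169.254.169.254/v1"

# Step 1: obtain a short-lived token (header name assumed from Linode's
# metadata service documentation).
req = urllib.request.Request(
    f"{BASE}/token",
    method="PUT",
    headers={"Metadata-Token-Expiry-Seconds": "3600"},
)
with urllib.request.urlopen(req, timeout=5) as resp:
    token = resp.read().decode()

# Step 2: fetch instance data with the token attached.
req = urllib.request.Request(
    f"{BASE}/instance",
    headers={"Metadata-Token": token, "Accept": "application/json"},
)
with urllib.request.urlopen(req, timeout=5) as resp:
    print(resp.read().decode())
```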
Aug 13 00:49:12.826079 sshd[1726]: Accepted publickey for core from 147.75.109.163 port 49182 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw Aug 13 00:49:12.827486 sshd-session[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:49:12.833698 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 13 00:49:12.835069 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 13 00:49:12.843253 systemd-logind[1528]: New session 1 of user core. Aug 13 00:49:12.855074 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 13 00:49:12.858480 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 13 00:49:12.875680 (systemd)[1730]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:49:12.878076 systemd-logind[1528]: New session c1 of user core. Aug 13 00:49:13.011532 systemd[1730]: Queued start job for default target default.target. Aug 13 00:49:13.023649 systemd[1730]: Created slice app.slice - User Application Slice. Aug 13 00:49:13.023677 systemd[1730]: Reached target paths.target - Paths. Aug 13 00:49:13.023718 systemd[1730]: Reached target timers.target - Timers. Aug 13 00:49:13.025048 systemd[1730]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 13 00:49:13.035467 systemd[1730]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 13 00:49:13.035545 systemd[1730]: Reached target sockets.target - Sockets. Aug 13 00:49:13.035586 systemd[1730]: Reached target basic.target - Basic System. Aug 13 00:49:13.035627 systemd[1730]: Reached target default.target - Main User Target. Aug 13 00:49:13.035660 systemd[1730]: Startup finished in 151ms. Aug 13 00:49:13.035807 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 13 00:49:13.038051 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 13 00:49:13.297539 systemd[1]: Started sshd@1-172.234.199.101:22-147.75.109.163:49188.service - OpenSSH per-connection server daemon (147.75.109.163:49188). Aug 13 00:49:13.650029 sshd[1741]: Accepted publickey for core from 147.75.109.163 port 49188 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw Aug 13 00:49:13.651789 sshd-session[1741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:49:13.656583 systemd-logind[1528]: New session 2 of user core. Aug 13 00:49:13.665637 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 13 00:49:13.902805 sshd[1743]: Connection closed by 147.75.109.163 port 49188 Aug 13 00:49:13.903339 sshd-session[1741]: pam_unix(sshd:session): session closed for user core Aug 13 00:49:13.907063 systemd-logind[1528]: Session 2 logged out. Waiting for processes to exit. Aug 13 00:49:13.907768 systemd[1]: sshd@1-172.234.199.101:22-147.75.109.163:49188.service: Deactivated successfully. Aug 13 00:49:13.909697 systemd[1]: session-2.scope: Deactivated successfully. Aug 13 00:49:13.911422 systemd-logind[1528]: Removed session 2. Aug 13 00:49:13.967663 systemd[1]: Started sshd@2-172.234.199.101:22-147.75.109.163:49204.service - OpenSSH per-connection server daemon (147.75.109.163:49204). 
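The SHA256:P6Sb... string in the "Accepted publickey" lines is OpenSSH's key fingerprint: the unpadded base64 of the SHA-256 digest of the raw public-key blob. A small sketch that reproduces that format from an authorized_keys line (the key material in the usage comment is hypothetical):

```python
import base64
import hashlib


def ssh_fingerprint(pubkey_line: str) -> str:
    """Return the fingerprint as sshd logs it, e.g. 'SHA256:P6Sb...'."""
    # An authorized_keys / public key line looks like: "ssh-rsa AAAAB3Nza... comment"
    blob = base64.b64decode(pubkey_line.split()[1])
    digest = hashlib.sha256(blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")


# Usage (hypothetical path and key):
# print(ssh_fingerprint(open("/home/core/.ssh/authorized_keys").readline()))
```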
Aug 13 00:49:14.299130 sshd[1749]: Accepted publickey for core from 147.75.109.163 port 49204 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw Aug 13 00:49:14.300260 sshd-session[1749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:49:14.304744 systemd-logind[1528]: New session 3 of user core. Aug 13 00:49:14.310635 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 13 00:49:14.539632 sshd[1751]: Connection closed by 147.75.109.163 port 49204 Aug 13 00:49:14.540282 sshd-session[1749]: pam_unix(sshd:session): session closed for user core Aug 13 00:49:14.543919 systemd-logind[1528]: Session 3 logged out. Waiting for processes to exit. Aug 13 00:49:14.544975 systemd[1]: sshd@2-172.234.199.101:22-147.75.109.163:49204.service: Deactivated successfully. Aug 13 00:49:14.547028 systemd[1]: session-3.scope: Deactivated successfully. Aug 13 00:49:14.548297 systemd-logind[1528]: Removed session 3. Aug 13 00:49:14.601661 systemd[1]: Started sshd@3-172.234.199.101:22-147.75.109.163:49214.service - OpenSSH per-connection server daemon (147.75.109.163:49214). Aug 13 00:49:14.954164 sshd[1757]: Accepted publickey for core from 147.75.109.163 port 49214 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw Aug 13 00:49:14.956185 sshd-session[1757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:49:14.961630 systemd-logind[1528]: New session 4 of user core. Aug 13 00:49:14.968638 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 13 00:49:15.202211 sshd[1759]: Connection closed by 147.75.109.163 port 49214 Aug 13 00:49:15.202913 sshd-session[1757]: pam_unix(sshd:session): session closed for user core Aug 13 00:49:15.205929 systemd[1]: sshd@3-172.234.199.101:22-147.75.109.163:49214.service: Deactivated successfully. Aug 13 00:49:15.208016 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 00:49:15.210477 systemd-logind[1528]: Session 4 logged out. Waiting for processes to exit. Aug 13 00:49:15.211584 systemd-logind[1528]: Removed session 4. Aug 13 00:49:15.263748 systemd[1]: Started sshd@4-172.234.199.101:22-147.75.109.163:49228.service - OpenSSH per-connection server daemon (147.75.109.163:49228). Aug 13 00:49:15.610943 sshd[1765]: Accepted publickey for core from 147.75.109.163 port 49228 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw Aug 13 00:49:15.612282 sshd-session[1765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:49:15.617192 systemd-logind[1528]: New session 5 of user core. Aug 13 00:49:15.623666 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 13 00:49:15.818071 sudo[1768]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 13 00:49:15.818378 sudo[1768]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:49:15.838326 sudo[1768]: pam_unix(sudo:session): session closed for user root Aug 13 00:49:15.890408 sshd[1767]: Connection closed by 147.75.109.163 port 49228 Aug 13 00:49:15.891256 sshd-session[1765]: pam_unix(sshd:session): session closed for user core Aug 13 00:49:15.895427 systemd-logind[1528]: Session 5 logged out. Waiting for processes to exit. Aug 13 00:49:15.895615 systemd[1]: sshd@4-172.234.199.101:22-147.75.109.163:49228.service: Deactivated successfully. Aug 13 00:49:15.897237 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 00:49:15.899126 systemd-logind[1528]: Removed session 5. 
Aug 13 00:49:15.955856 systemd[1]: Started sshd@5-172.234.199.101:22-147.75.109.163:49230.service - OpenSSH per-connection server daemon (147.75.109.163:49230). Aug 13 00:49:16.307459 sshd[1774]: Accepted publickey for core from 147.75.109.163 port 49230 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw Aug 13 00:49:16.309126 sshd-session[1774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:49:16.314600 systemd-logind[1528]: New session 6 of user core. Aug 13 00:49:16.321833 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 13 00:49:16.507398 sudo[1778]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 13 00:49:16.507728 sudo[1778]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:49:16.512473 sudo[1778]: pam_unix(sudo:session): session closed for user root Aug 13 00:49:16.518108 sudo[1777]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Aug 13 00:49:16.518408 sudo[1777]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:49:16.528087 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 13 00:49:16.564514 augenrules[1800]: No rules Aug 13 00:49:16.565716 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 00:49:16.565973 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 13 00:49:16.566797 sudo[1777]: pam_unix(sudo:session): session closed for user root Aug 13 00:49:16.618326 sshd[1776]: Connection closed by 147.75.109.163 port 49230 Aug 13 00:49:16.618944 sshd-session[1774]: pam_unix(sshd:session): session closed for user core Aug 13 00:49:16.622718 systemd-logind[1528]: Session 6 logged out. Waiting for processes to exit. Aug 13 00:49:16.623247 systemd[1]: sshd@5-172.234.199.101:22-147.75.109.163:49230.service: Deactivated successfully. Aug 13 00:49:16.625196 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 00:49:16.627183 systemd-logind[1528]: Removed session 6. Aug 13 00:49:16.679566 systemd[1]: Started sshd@6-172.234.199.101:22-147.75.109.163:49236.service - OpenSSH per-connection server daemon (147.75.109.163:49236). Aug 13 00:49:17.026878 sshd[1809]: Accepted publickey for core from 147.75.109.163 port 49236 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw Aug 13 00:49:17.028629 sshd-session[1809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:49:17.034212 systemd-logind[1528]: New session 7 of user core. Aug 13 00:49:17.036664 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 13 00:49:17.226578 sudo[1812]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 00:49:17.226874 sudo[1812]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:49:17.505545 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Aug 13 00:49:17.515813 (dockerd)[1830]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 13 00:49:17.710134 dockerd[1830]: time="2025-08-13T00:49:17.710072272Z" level=info msg="Starting up" Aug 13 00:49:17.711343 dockerd[1830]: time="2025-08-13T00:49:17.711321632Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Aug 13 00:49:17.736061 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport940962185-merged.mount: Deactivated successfully. Aug 13 00:49:17.744493 systemd[1]: var-lib-docker-metacopy\x2dcheck2202068339-merged.mount: Deactivated successfully. Aug 13 00:49:17.765502 dockerd[1830]: time="2025-08-13T00:49:17.765356792Z" level=info msg="Loading containers: start." Aug 13 00:49:17.774548 kernel: Initializing XFRM netlink socket Aug 13 00:49:18.009792 systemd-networkd[1474]: docker0: Link UP Aug 13 00:49:18.012396 dockerd[1830]: time="2025-08-13T00:49:18.012351352Z" level=info msg="Loading containers: done." Aug 13 00:49:18.026707 dockerd[1830]: time="2025-08-13T00:49:18.026626982Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 13 00:49:18.026707 dockerd[1830]: time="2025-08-13T00:49:18.026679682Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Aug 13 00:49:18.026836 dockerd[1830]: time="2025-08-13T00:49:18.026778982Z" level=info msg="Initializing buildkit" Aug 13 00:49:18.045444 dockerd[1830]: time="2025-08-13T00:49:18.045407872Z" level=info msg="Completed buildkit initialization" Aug 13 00:49:18.052535 dockerd[1830]: time="2025-08-13T00:49:18.052450372Z" level=info msg="Daemon has completed initialization" Aug 13 00:49:18.052612 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 13 00:49:18.052808 dockerd[1830]: time="2025-08-13T00:49:18.052603372Z" level=info msg="API listen on /run/docker.sock" Aug 13 00:49:18.887855 containerd[1575]: time="2025-08-13T00:49:18.887796722Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\"" Aug 13 00:49:19.770033 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2962252897.mount: Deactivated successfully. 
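Once dockerd reports "API listen on /run/docker.sock", the Engine API is reachable over that unix socket. A minimal stdlib sketch that asks the daemon for its version; the connection subclass is an illustrative shim, not part of Docker's own tooling:

```python
import http.client
import json
import socket


class DockerUnixConnection(http.client.HTTPConnection):
    """Plain HTTP over the unix socket dockerd listens on."""

    def __init__(self, path="/run/docker.sock"):
        super().__init__("localhost")
        self.unix_path = path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.unix_path)


conn = DockerUnixConnection()
conn.request("GET", "/version")  # Docker Engine API version endpoint
resp = conn.getresponse()
info = json.loads(resp.read())
print(info.get("Version"), info.get("ApiVersion"))
```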
Aug 13 00:49:20.876397 containerd[1575]: time="2025-08-13T00:49:20.876039552Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:49:20.877089 containerd[1575]: time="2025-08-13T00:49:20.876319302Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.11: active requests=0, bytes read=28077759" Aug 13 00:49:20.881377 containerd[1575]: time="2025-08-13T00:49:20.880158192Z" level=info msg="ImageCreate event name:\"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:49:20.882060 containerd[1575]: time="2025-08-13T00:49:20.881998582Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:49:20.883360 containerd[1575]: time="2025-08-13T00:49:20.883327862Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.11\" with image id \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\", size \"28074559\" in 1.99547023s" Aug 13 00:49:20.883462 containerd[1575]: time="2025-08-13T00:49:20.883442622Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\" returns image reference \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\"" Aug 13 00:49:20.884428 containerd[1575]: time="2025-08-13T00:49:20.884374462Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\"" Aug 13 00:49:21.861087 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 13 00:49:21.864675 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:49:22.051232 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:49:22.063842 (kubelet)[2098]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:49:22.110335 kubelet[2098]: E0813 00:49:22.110131 2098 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:49:22.117937 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:49:22.118124 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:49:22.118507 systemd[1]: kubelet.service: Consumed 194ms CPU time, 110.9M memory peak. 
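The kubelet exits above because /var/lib/kubelet/config.yaml does not exist yet. For illustration only, a sketch that writes a bare-bones KubeletConfiguration matching the cgroup driver and static pod path seen later in this log; on a real node the bootstrap tooling (e.g. kubeadm) generates this file rather than an operator writing it by hand:

```python
import pathlib
import textwrap

# Minimal, assumed-illustrative KubeletConfiguration; field names are from the
# kubelet.config.k8s.io/v1beta1 schema.
config = textwrap.dedent("""\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    staticPodPath: /etc/kubernetes/manifests
""")

path = pathlib.Path("/var/lib/kubelet/config.yaml")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(config)
print("wrote", path)
```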
Aug 13 00:49:22.397238 containerd[1575]: time="2025-08-13T00:49:22.397127582Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:49:22.398499 containerd[1575]: time="2025-08-13T00:49:22.398234932Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.11: active requests=0, bytes read=24713245" Aug 13 00:49:22.399045 containerd[1575]: time="2025-08-13T00:49:22.399012182Z" level=info msg="ImageCreate event name:\"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:49:22.401003 containerd[1575]: time="2025-08-13T00:49:22.400973332Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:49:22.401833 containerd[1575]: time="2025-08-13T00:49:22.401799802Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.11\" with image id \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\", size \"26315079\" in 1.51723846s" Aug 13 00:49:22.401878 containerd[1575]: time="2025-08-13T00:49:22.401834582Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\" returns image reference \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\"" Aug 13 00:49:22.402300 containerd[1575]: time="2025-08-13T00:49:22.402262512Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\"" Aug 13 00:49:23.580321 containerd[1575]: time="2025-08-13T00:49:23.580244612Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:49:23.581366 containerd[1575]: time="2025-08-13T00:49:23.581150622Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.11: active requests=0, bytes read=18783700" Aug 13 00:49:23.581918 containerd[1575]: time="2025-08-13T00:49:23.581893612Z" level=info msg="ImageCreate event name:\"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:49:23.586654 containerd[1575]: time="2025-08-13T00:49:23.586619542Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.11\" with image id \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\", size \"20385552\" in 1.18432631s" Aug 13 00:49:23.586697 containerd[1575]: time="2025-08-13T00:49:23.586653402Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\" returns image reference \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\"" Aug 13 00:49:23.586855 containerd[1575]: time="2025-08-13T00:49:23.586824662Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:49:23.587923 
containerd[1575]: time="2025-08-13T00:49:23.587879172Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\"" Aug 13 00:49:24.817535 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1142293056.mount: Deactivated successfully. Aug 13 00:49:25.522414 containerd[1575]: time="2025-08-13T00:49:25.522331572Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:49:25.525542 containerd[1575]: time="2025-08-13T00:49:25.524844252Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.11: active requests=0, bytes read=30383612" Aug 13 00:49:25.525542 containerd[1575]: time="2025-08-13T00:49:25.524924112Z" level=info msg="ImageCreate event name:\"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:49:25.528377 containerd[1575]: time="2025-08-13T00:49:25.528339962Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:49:25.528766 containerd[1575]: time="2025-08-13T00:49:25.528733082Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.11\" with image id \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\", repo tag \"registry.k8s.io/kube-proxy:v1.31.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\", size \"30382631\" in 1.94060395s" Aug 13 00:49:25.528801 containerd[1575]: time="2025-08-13T00:49:25.528764782Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\" returns image reference \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\"" Aug 13 00:49:25.529483 containerd[1575]: time="2025-08-13T00:49:25.529458142Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 00:49:26.319232 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2673683525.mount: Deactivated successfully. 
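Taking the "bytes read" and elapsed-time figures from the image pulls so far, the effective pull throughput works out to roughly 13 to 16 MiB/s:

```python
# Back-of-the-envelope throughput from the containerd pull messages above:
# (bytes read, seconds) per image.
pulls = {
    "kube-apiserver:v1.31.11":          (28077759, 1.99547023),
    "kube-controller-manager:v1.31.11": (24713245, 1.51723846),
    "kube-scheduler:v1.31.11":          (18783700, 1.18432631),
    "kube-proxy:v1.31.11":              (30383612, 1.94060395),
}

for image, (nbytes, secs) in pulls.items():
    print(f"{image:35s} {nbytes / secs / 2**20:5.1f} MiB/s")
```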
Aug 13 00:49:26.958811 containerd[1575]: time="2025-08-13T00:49:26.957843802Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:49:26.958811 containerd[1575]: time="2025-08-13T00:49:26.958728952Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Aug 13 00:49:26.959321 containerd[1575]: time="2025-08-13T00:49:26.959271772Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:49:26.961002 containerd[1575]: time="2025-08-13T00:49:26.960975882Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:49:26.961901 containerd[1575]: time="2025-08-13T00:49:26.961823742Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.43234129s" Aug 13 00:49:26.961901 containerd[1575]: time="2025-08-13T00:49:26.961899802Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 13 00:49:26.962674 containerd[1575]: time="2025-08-13T00:49:26.962632552Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 13 00:49:27.659214 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2089239862.mount: Deactivated successfully. 
Aug 13 00:49:27.663452 containerd[1575]: time="2025-08-13T00:49:27.663385012Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:49:27.664141 containerd[1575]: time="2025-08-13T00:49:27.664094652Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Aug 13 00:49:27.664768 containerd[1575]: time="2025-08-13T00:49:27.664711842Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:49:27.666090 containerd[1575]: time="2025-08-13T00:49:27.666045782Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:49:27.667039 containerd[1575]: time="2025-08-13T00:49:27.666672402Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 703.92606ms" Aug 13 00:49:27.667039 containerd[1575]: time="2025-08-13T00:49:27.666708882Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 13 00:49:27.667543 containerd[1575]: time="2025-08-13T00:49:27.667472662Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Aug 13 00:49:28.411797 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3061001561.mount: Deactivated successfully. 
Aug 13 00:49:29.922679 containerd[1575]: time="2025-08-13T00:49:29.921444362Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:49:29.922679 containerd[1575]: time="2025-08-13T00:49:29.923063572Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" Aug 13 00:49:29.926319 containerd[1575]: time="2025-08-13T00:49:29.923577282Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:49:29.926780 containerd[1575]: time="2025-08-13T00:49:29.926697842Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:49:29.927062 containerd[1575]: time="2025-08-13T00:49:29.927019682Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.25941149s" Aug 13 00:49:29.927103 containerd[1575]: time="2025-08-13T00:49:29.927060082Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Aug 13 00:49:31.620596 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:49:31.620757 systemd[1]: kubelet.service: Consumed 194ms CPU time, 110.9M memory peak. Aug 13 00:49:31.623256 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:49:31.651948 systemd[1]: Reload requested from client PID 2254 ('systemctl') (unit session-7.scope)... Aug 13 00:49:31.651965 systemd[1]: Reloading... Aug 13 00:49:31.813811 zram_generator::config[2298]: No configuration found. Aug 13 00:49:31.913771 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:49:32.033452 systemd[1]: Reloading finished in 381 ms. Aug 13 00:49:32.099128 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 13 00:49:32.099224 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 13 00:49:32.099703 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:49:32.099742 systemd[1]: kubelet.service: Consumed 139ms CPU time, 98.3M memory peak. Aug 13 00:49:32.101362 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:49:32.269876 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:49:32.273310 (kubelet)[2352]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 00:49:32.313506 kubelet[2352]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:49:32.313506 kubelet[2352]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Aug 13 00:49:32.313506 kubelet[2352]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:49:32.313884 kubelet[2352]: I0813 00:49:32.313582 2352 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:49:32.834551 kubelet[2352]: I0813 00:49:32.834495 2352 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 00:49:32.834551 kubelet[2352]: I0813 00:49:32.834535 2352 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:49:32.834786 kubelet[2352]: I0813 00:49:32.834754 2352 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 00:49:32.863152 kubelet[2352]: E0813 00:49:32.863131 2352 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.234.199.101:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.234.199.101:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:49:32.863391 kubelet[2352]: I0813 00:49:32.863200 2352 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:49:32.875388 kubelet[2352]: I0813 00:49:32.875340 2352 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Aug 13 00:49:32.883013 kubelet[2352]: I0813 00:49:32.882991 2352 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 00:49:32.883364 kubelet[2352]: I0813 00:49:32.883088 2352 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 00:49:32.883364 kubelet[2352]: I0813 00:49:32.883201 2352 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:49:32.883364 kubelet[2352]: I0813 00:49:32.883224 2352 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-234-199-101","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 00:49:32.883791 kubelet[2352]: I0813 00:49:32.883371 2352 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:49:32.883791 kubelet[2352]: I0813 00:49:32.883379 2352 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 00:49:32.883791 kubelet[2352]: I0813 00:49:32.883715 2352 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:49:32.886486 kubelet[2352]: I0813 00:49:32.886165 2352 kubelet.go:408] "Attempting to sync node with API server" Aug 13 00:49:32.886486 kubelet[2352]: I0813 00:49:32.886182 2352 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:49:32.886486 kubelet[2352]: I0813 00:49:32.886211 2352 kubelet.go:314] "Adding apiserver pod source" Aug 13 00:49:32.886486 kubelet[2352]: I0813 00:49:32.886229 2352 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:49:32.888855 kubelet[2352]: W0813 00:49:32.888822 2352 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.234.199.101:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-234-199-101&limit=500&resourceVersion=0": dial tcp 172.234.199.101:6443: connect: connection refused Aug 13 00:49:32.889100 kubelet[2352]: E0813 00:49:32.889082 2352 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://172.234.199.101:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-234-199-101&limit=500&resourceVersion=0\": dial tcp 172.234.199.101:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:49:32.889224 kubelet[2352]: I0813 00:49:32.889212 2352 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Aug 13 00:49:32.889584 kubelet[2352]: I0813 00:49:32.889571 2352 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 00:49:32.889681 kubelet[2352]: W0813 00:49:32.889671 2352 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 13 00:49:32.891312 kubelet[2352]: I0813 00:49:32.891299 2352 server.go:1274] "Started kubelet" Aug 13 00:49:32.893228 kubelet[2352]: W0813 00:49:32.892739 2352 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.234.199.101:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.234.199.101:6443: connect: connection refused Aug 13 00:49:32.893228 kubelet[2352]: E0813 00:49:32.892770 2352 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.234.199.101:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.234.199.101:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:49:32.893228 kubelet[2352]: I0813 00:49:32.892831 2352 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:49:32.894133 kubelet[2352]: I0813 00:49:32.893556 2352 server.go:449] "Adding debug handlers to kubelet server" Aug 13 00:49:32.897645 kubelet[2352]: I0813 00:49:32.897495 2352 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:49:32.897833 kubelet[2352]: I0813 00:49:32.897810 2352 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:49:32.898042 kubelet[2352]: I0813 00:49:32.898029 2352 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:49:32.900825 kubelet[2352]: E0813 00:49:32.898208 2352 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.234.199.101:6443/api/v1/namespaces/default/events\": dial tcp 172.234.199.101:6443: connect: connection refused" event="&Event{ObjectMeta:{172-234-199-101.185b2d321397f926 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-234-199-101,UID:172-234-199-101,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-234-199-101,},FirstTimestamp:2025-08-13 00:49:32.891281702 +0000 UTC m=+0.609340781,LastTimestamp:2025-08-13 00:49:32.891281702 +0000 UTC m=+0.609340781,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-234-199-101,}" Aug 13 00:49:32.900825 kubelet[2352]: I0813 00:49:32.899819 2352 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:49:32.903037 kubelet[2352]: E0813 00:49:32.903014 2352 kubelet_node_status.go:453] "Error 
getting the current node from lister" err="node \"172-234-199-101\" not found" Aug 13 00:49:32.903075 kubelet[2352]: I0813 00:49:32.903045 2352 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 00:49:32.903578 kubelet[2352]: I0813 00:49:32.903172 2352 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 00:49:32.903578 kubelet[2352]: I0813 00:49:32.903211 2352 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:49:32.903578 kubelet[2352]: W0813 00:49:32.903410 2352 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.234.199.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.234.199.101:6443: connect: connection refused Aug 13 00:49:32.903578 kubelet[2352]: E0813 00:49:32.903436 2352 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.234.199.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.234.199.101:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:49:32.904004 kubelet[2352]: E0813 00:49:32.903971 2352 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.199.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-199-101?timeout=10s\": dial tcp 172.234.199.101:6443: connect: connection refused" interval="200ms" Aug 13 00:49:32.904230 kubelet[2352]: I0813 00:49:32.904211 2352 factory.go:221] Registration of the systemd container factory successfully Aug 13 00:49:32.904296 kubelet[2352]: I0813 00:49:32.904274 2352 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:49:32.905747 kubelet[2352]: I0813 00:49:32.905727 2352 factory.go:221] Registration of the containerd container factory successfully Aug 13 00:49:32.920035 kubelet[2352]: I0813 00:49:32.920008 2352 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 00:49:32.926157 kubelet[2352]: I0813 00:49:32.926143 2352 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 00:49:32.926240 kubelet[2352]: I0813 00:49:32.926231 2352 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 00:49:32.926293 kubelet[2352]: I0813 00:49:32.926286 2352 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 00:49:32.926371 kubelet[2352]: E0813 00:49:32.926356 2352 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:49:32.926892 kubelet[2352]: I0813 00:49:32.926874 2352 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 00:49:32.926892 kubelet[2352]: I0813 00:49:32.926887 2352 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 00:49:32.926959 kubelet[2352]: I0813 00:49:32.926900 2352 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:49:32.928679 kubelet[2352]: I0813 00:49:32.928657 2352 policy_none.go:49] "None policy: Start" Aug 13 00:49:32.929557 kubelet[2352]: I0813 00:49:32.929505 2352 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 00:49:32.929599 kubelet[2352]: I0813 00:49:32.929562 2352 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:49:32.933182 kubelet[2352]: W0813 00:49:32.933119 2352 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.234.199.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.234.199.101:6443: connect: connection refused Aug 13 00:49:32.933182 kubelet[2352]: E0813 00:49:32.933162 2352 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.234.199.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.234.199.101:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:49:32.936861 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 13 00:49:32.950590 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 13 00:49:32.953900 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Aug 13 00:49:32.963297 kubelet[2352]: I0813 00:49:32.963278 2352 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 00:49:32.963610 kubelet[2352]: I0813 00:49:32.963596 2352 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:49:32.963686 kubelet[2352]: I0813 00:49:32.963660 2352 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:49:32.964091 kubelet[2352]: I0813 00:49:32.964079 2352 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:49:32.965244 kubelet[2352]: E0813 00:49:32.965202 2352 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-234-199-101\" not found" Aug 13 00:49:33.035298 systemd[1]: Created slice kubepods-burstable-podc54207a78989b998dd1f6033c818d493.slice - libcontainer container kubepods-burstable-podc54207a78989b998dd1f6033c818d493.slice. 
Aug 13 00:49:33.065058 kubelet[2352]: I0813 00:49:33.065038 2352 kubelet_node_status.go:72] "Attempting to register node" node="172-234-199-101" Aug 13 00:49:33.065457 kubelet[2352]: E0813 00:49:33.065426 2352 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.234.199.101:6443/api/v1/nodes\": dial tcp 172.234.199.101:6443: connect: connection refused" node="172-234-199-101" Aug 13 00:49:33.067076 systemd[1]: Created slice kubepods-burstable-pod011977b58a6487cb537f1a7218f983b2.slice - libcontainer container kubepods-burstable-pod011977b58a6487cb537f1a7218f983b2.slice. Aug 13 00:49:33.080244 systemd[1]: Created slice kubepods-burstable-pod97f7debad30380a392b5cd05abad2964.slice - libcontainer container kubepods-burstable-pod97f7debad30380a392b5cd05abad2964.slice. Aug 13 00:49:33.104280 kubelet[2352]: E0813 00:49:33.104186 2352 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.199.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-199-101?timeout=10s\": dial tcp 172.234.199.101:6443: connect: connection refused" interval="400ms" Aug 13 00:49:33.204599 kubelet[2352]: I0813 00:49:33.204553 2352 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/97f7debad30380a392b5cd05abad2964-usr-share-ca-certificates\") pod \"kube-apiserver-172-234-199-101\" (UID: \"97f7debad30380a392b5cd05abad2964\") " pod="kube-system/kube-apiserver-172-234-199-101" Aug 13 00:49:33.204599 kubelet[2352]: I0813 00:49:33.204582 2352 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c54207a78989b998dd1f6033c818d493-ca-certs\") pod \"kube-controller-manager-172-234-199-101\" (UID: \"c54207a78989b998dd1f6033c818d493\") " pod="kube-system/kube-controller-manager-172-234-199-101" Aug 13 00:49:33.204599 kubelet[2352]: I0813 00:49:33.204596 2352 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/011977b58a6487cb537f1a7218f983b2-kubeconfig\") pod \"kube-scheduler-172-234-199-101\" (UID: \"011977b58a6487cb537f1a7218f983b2\") " pod="kube-system/kube-scheduler-172-234-199-101" Aug 13 00:49:33.204709 kubelet[2352]: I0813 00:49:33.204610 2352 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/97f7debad30380a392b5cd05abad2964-ca-certs\") pod \"kube-apiserver-172-234-199-101\" (UID: \"97f7debad30380a392b5cd05abad2964\") " pod="kube-system/kube-apiserver-172-234-199-101" Aug 13 00:49:33.204709 kubelet[2352]: I0813 00:49:33.204621 2352 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/97f7debad30380a392b5cd05abad2964-k8s-certs\") pod \"kube-apiserver-172-234-199-101\" (UID: \"97f7debad30380a392b5cd05abad2964\") " pod="kube-system/kube-apiserver-172-234-199-101" Aug 13 00:49:33.204709 kubelet[2352]: I0813 00:49:33.204638 2352 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c54207a78989b998dd1f6033c818d493-flexvolume-dir\") pod \"kube-controller-manager-172-234-199-101\" (UID: \"c54207a78989b998dd1f6033c818d493\") " 
pod="kube-system/kube-controller-manager-172-234-199-101" Aug 13 00:49:33.204709 kubelet[2352]: I0813 00:49:33.204649 2352 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c54207a78989b998dd1f6033c818d493-k8s-certs\") pod \"kube-controller-manager-172-234-199-101\" (UID: \"c54207a78989b998dd1f6033c818d493\") " pod="kube-system/kube-controller-manager-172-234-199-101" Aug 13 00:49:33.204709 kubelet[2352]: I0813 00:49:33.204660 2352 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c54207a78989b998dd1f6033c818d493-kubeconfig\") pod \"kube-controller-manager-172-234-199-101\" (UID: \"c54207a78989b998dd1f6033c818d493\") " pod="kube-system/kube-controller-manager-172-234-199-101" Aug 13 00:49:33.204851 kubelet[2352]: I0813 00:49:33.204680 2352 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c54207a78989b998dd1f6033c818d493-usr-share-ca-certificates\") pod \"kube-controller-manager-172-234-199-101\" (UID: \"c54207a78989b998dd1f6033c818d493\") " pod="kube-system/kube-controller-manager-172-234-199-101" Aug 13 00:49:33.267399 kubelet[2352]: I0813 00:49:33.267382 2352 kubelet_node_status.go:72] "Attempting to register node" node="172-234-199-101" Aug 13 00:49:33.267668 kubelet[2352]: E0813 00:49:33.267651 2352 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.234.199.101:6443/api/v1/nodes\": dial tcp 172.234.199.101:6443: connect: connection refused" node="172-234-199-101" Aug 13 00:49:33.364113 kubelet[2352]: E0813 00:49:33.364068 2352 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:49:33.364644 containerd[1575]: time="2025-08-13T00:49:33.364561112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-234-199-101,Uid:c54207a78989b998dd1f6033c818d493,Namespace:kube-system,Attempt:0,}" Aug 13 00:49:33.380939 kubelet[2352]: E0813 00:49:33.380178 2352 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:49:33.380939 kubelet[2352]: E0813 00:49:33.382807 2352 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:49:33.385215 containerd[1575]: time="2025-08-13T00:49:33.385178522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-234-199-101,Uid:97f7debad30380a392b5cd05abad2964,Namespace:kube-system,Attempt:0,}" Aug 13 00:49:33.393351 containerd[1575]: time="2025-08-13T00:49:33.393294412Z" level=info msg="connecting to shim 97e50ad64ff17d377df98cd1f00fc1182ded8195625f9de55ea2eb3ac3c5f894" address="unix:///run/containerd/s/ef2bf0ae8ee991e7071cd0149c1c1df30a659bb16cc16f28dc678a07ed0e4035" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:49:33.394243 containerd[1575]: time="2025-08-13T00:49:33.394216552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-234-199-101,Uid:011977b58a6487cb537f1a7218f983b2,Namespace:kube-system,Attempt:0,}" 
Aug 13 00:49:33.422814 containerd[1575]: time="2025-08-13T00:49:33.422740812Z" level=info msg="connecting to shim 3c9c023f8fd8aeeb8dd33321cdccf1785fa53209f5eba4ffaebcbfae9edfa506" address="unix:///run/containerd/s/0b74200b61fec3d418216ac785c0851120ed5dfe8e8511c95de7f5e421602ca9" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:49:33.432035 containerd[1575]: time="2025-08-13T00:49:33.431717962Z" level=info msg="connecting to shim 5aaf0e3637cdc766a6698733a3fd1c847fd3cfe748d0b68c43a9cfa1f5885847" address="unix:///run/containerd/s/9704d0f7e6ce59a9391b853efe771a05e432802adc52a32ed28fb4a60a03be69" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:49:33.436813 systemd[1]: Started cri-containerd-97e50ad64ff17d377df98cd1f00fc1182ded8195625f9de55ea2eb3ac3c5f894.scope - libcontainer container 97e50ad64ff17d377df98cd1f00fc1182ded8195625f9de55ea2eb3ac3c5f894. Aug 13 00:49:33.468801 systemd[1]: Started cri-containerd-3c9c023f8fd8aeeb8dd33321cdccf1785fa53209f5eba4ffaebcbfae9edfa506.scope - libcontainer container 3c9c023f8fd8aeeb8dd33321cdccf1785fa53209f5eba4ffaebcbfae9edfa506. Aug 13 00:49:33.474622 systemd[1]: Started cri-containerd-5aaf0e3637cdc766a6698733a3fd1c847fd3cfe748d0b68c43a9cfa1f5885847.scope - libcontainer container 5aaf0e3637cdc766a6698733a3fd1c847fd3cfe748d0b68c43a9cfa1f5885847. Aug 13 00:49:33.506162 kubelet[2352]: E0813 00:49:33.506078 2352 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.199.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-199-101?timeout=10s\": dial tcp 172.234.199.101:6443: connect: connection refused" interval="800ms" Aug 13 00:49:33.535563 containerd[1575]: time="2025-08-13T00:49:33.535477432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-234-199-101,Uid:c54207a78989b998dd1f6033c818d493,Namespace:kube-system,Attempt:0,} returns sandbox id \"97e50ad64ff17d377df98cd1f00fc1182ded8195625f9de55ea2eb3ac3c5f894\"" Aug 13 00:49:33.540903 kubelet[2352]: E0813 00:49:33.540803 2352 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:49:33.547387 containerd[1575]: time="2025-08-13T00:49:33.547329792Z" level=info msg="CreateContainer within sandbox \"97e50ad64ff17d377df98cd1f00fc1182ded8195625f9de55ea2eb3ac3c5f894\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 00:49:33.564809 containerd[1575]: time="2025-08-13T00:49:33.564749542Z" level=info msg="Container f0476a43f051868712718256864bfb53c1077e3d70921e97a1d38f4377ff153d: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:49:33.567141 containerd[1575]: time="2025-08-13T00:49:33.567113732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-234-199-101,Uid:97f7debad30380a392b5cd05abad2964,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c9c023f8fd8aeeb8dd33321cdccf1785fa53209f5eba4ffaebcbfae9edfa506\"" Aug 13 00:49:33.570748 kubelet[2352]: E0813 00:49:33.570721 2352 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:49:33.575446 containerd[1575]: time="2025-08-13T00:49:33.575392122Z" level=info msg="CreateContainer within sandbox \"3c9c023f8fd8aeeb8dd33321cdccf1785fa53209f5eba4ffaebcbfae9edfa506\" for container 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 00:49:33.583812 containerd[1575]: time="2025-08-13T00:49:33.583381152Z" level=info msg="CreateContainer within sandbox \"97e50ad64ff17d377df98cd1f00fc1182ded8195625f9de55ea2eb3ac3c5f894\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f0476a43f051868712718256864bfb53c1077e3d70921e97a1d38f4377ff153d\"" Aug 13 00:49:33.585702 containerd[1575]: time="2025-08-13T00:49:33.585669152Z" level=info msg="StartContainer for \"f0476a43f051868712718256864bfb53c1077e3d70921e97a1d38f4377ff153d\"" Aug 13 00:49:33.586793 containerd[1575]: time="2025-08-13T00:49:33.586758602Z" level=info msg="connecting to shim f0476a43f051868712718256864bfb53c1077e3d70921e97a1d38f4377ff153d" address="unix:///run/containerd/s/ef2bf0ae8ee991e7071cd0149c1c1df30a659bb16cc16f28dc678a07ed0e4035" protocol=ttrpc version=3 Aug 13 00:49:33.588546 containerd[1575]: time="2025-08-13T00:49:33.588037912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-234-199-101,Uid:011977b58a6487cb537f1a7218f983b2,Namespace:kube-system,Attempt:0,} returns sandbox id \"5aaf0e3637cdc766a6698733a3fd1c847fd3cfe748d0b68c43a9cfa1f5885847\"" Aug 13 00:49:33.589875 kubelet[2352]: E0813 00:49:33.589840 2352 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:49:33.590241 containerd[1575]: time="2025-08-13T00:49:33.590223042Z" level=info msg="Container d6f409c72b341f5bb366bd4eea96e6cb0538581aad0078802ecea208b8920d7a: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:49:33.593121 containerd[1575]: time="2025-08-13T00:49:33.593074502Z" level=info msg="CreateContainer within sandbox \"5aaf0e3637cdc766a6698733a3fd1c847fd3cfe748d0b68c43a9cfa1f5885847\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 00:49:33.595352 containerd[1575]: time="2025-08-13T00:49:33.594917182Z" level=info msg="CreateContainer within sandbox \"3c9c023f8fd8aeeb8dd33321cdccf1785fa53209f5eba4ffaebcbfae9edfa506\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d6f409c72b341f5bb366bd4eea96e6cb0538581aad0078802ecea208b8920d7a\"" Aug 13 00:49:33.595564 containerd[1575]: time="2025-08-13T00:49:33.595534572Z" level=info msg="StartContainer for \"d6f409c72b341f5bb366bd4eea96e6cb0538581aad0078802ecea208b8920d7a\"" Aug 13 00:49:33.596396 containerd[1575]: time="2025-08-13T00:49:33.596371842Z" level=info msg="connecting to shim d6f409c72b341f5bb366bd4eea96e6cb0538581aad0078802ecea208b8920d7a" address="unix:///run/containerd/s/0b74200b61fec3d418216ac785c0851120ed5dfe8e8511c95de7f5e421602ca9" protocol=ttrpc version=3 Aug 13 00:49:33.602685 containerd[1575]: time="2025-08-13T00:49:33.602664622Z" level=info msg="Container 5d74865c157f6b115e89f47b51d95e1547fabcb47a9a13740e87b83286a42588: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:49:33.612012 containerd[1575]: time="2025-08-13T00:49:33.611888142Z" level=info msg="CreateContainer within sandbox \"5aaf0e3637cdc766a6698733a3fd1c847fd3cfe748d0b68c43a9cfa1f5885847\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5d74865c157f6b115e89f47b51d95e1547fabcb47a9a13740e87b83286a42588\"" Aug 13 00:49:33.613989 containerd[1575]: time="2025-08-13T00:49:33.613831602Z" level=info msg="StartContainer for \"5d74865c157f6b115e89f47b51d95e1547fabcb47a9a13740e87b83286a42588\"" Aug 13 00:49:33.615679 
containerd[1575]: time="2025-08-13T00:49:33.615608072Z" level=info msg="connecting to shim 5d74865c157f6b115e89f47b51d95e1547fabcb47a9a13740e87b83286a42588" address="unix:///run/containerd/s/9704d0f7e6ce59a9391b853efe771a05e432802adc52a32ed28fb4a60a03be69" protocol=ttrpc version=3 Aug 13 00:49:33.617763 systemd[1]: Started cri-containerd-f0476a43f051868712718256864bfb53c1077e3d70921e97a1d38f4377ff153d.scope - libcontainer container f0476a43f051868712718256864bfb53c1077e3d70921e97a1d38f4377ff153d. Aug 13 00:49:33.631782 systemd[1]: Started cri-containerd-d6f409c72b341f5bb366bd4eea96e6cb0538581aad0078802ecea208b8920d7a.scope - libcontainer container d6f409c72b341f5bb366bd4eea96e6cb0538581aad0078802ecea208b8920d7a. Aug 13 00:49:33.648668 systemd[1]: Started cri-containerd-5d74865c157f6b115e89f47b51d95e1547fabcb47a9a13740e87b83286a42588.scope - libcontainer container 5d74865c157f6b115e89f47b51d95e1547fabcb47a9a13740e87b83286a42588. Aug 13 00:49:33.671838 kubelet[2352]: I0813 00:49:33.671804 2352 kubelet_node_status.go:72] "Attempting to register node" node="172-234-199-101" Aug 13 00:49:33.673105 kubelet[2352]: E0813 00:49:33.672916 2352 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.234.199.101:6443/api/v1/nodes\": dial tcp 172.234.199.101:6443: connect: connection refused" node="172-234-199-101" Aug 13 00:49:33.734020 containerd[1575]: time="2025-08-13T00:49:33.733937162Z" level=info msg="StartContainer for \"f0476a43f051868712718256864bfb53c1077e3d70921e97a1d38f4377ff153d\" returns successfully" Aug 13 00:49:33.734608 containerd[1575]: time="2025-08-13T00:49:33.733970312Z" level=info msg="StartContainer for \"d6f409c72b341f5bb366bd4eea96e6cb0538581aad0078802ecea208b8920d7a\" returns successfully" Aug 13 00:49:33.757942 containerd[1575]: time="2025-08-13T00:49:33.757855092Z" level=info msg="StartContainer for \"5d74865c157f6b115e89f47b51d95e1547fabcb47a9a13740e87b83286a42588\" returns successfully" Aug 13 00:49:33.937960 kubelet[2352]: E0813 00:49:33.937758 2352 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:49:33.940502 kubelet[2352]: E0813 00:49:33.940473 2352 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:49:33.943702 kubelet[2352]: E0813 00:49:33.943674 2352 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:49:34.476208 kubelet[2352]: I0813 00:49:34.476166 2352 kubelet_node_status.go:72] "Attempting to register node" node="172-234-199-101" Aug 13 00:49:34.955840 kubelet[2352]: E0813 00:49:34.955797 2352 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:49:35.335896 kubelet[2352]: E0813 00:49:35.335732 2352 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-234-199-101\" not found" node="172-234-199-101" Aug 13 00:49:35.413650 kubelet[2352]: I0813 00:49:35.413583 2352 kubelet_node_status.go:75] "Successfully registered node" node="172-234-199-101" Aug 13 00:49:35.413650 kubelet[2352]: E0813 00:49:35.413646 2352 
kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"172-234-199-101\": node \"172-234-199-101\" not found" Aug 13 00:49:35.454265 kubelet[2352]: E0813 00:49:35.454210 2352 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-172-234-199-101\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-234-199-101" Aug 13 00:49:35.454467 kubelet[2352]: E0813 00:49:35.454423 2352 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:49:35.894751 kubelet[2352]: I0813 00:49:35.894710 2352 apiserver.go:52] "Watching apiserver" Aug 13 00:49:35.904189 kubelet[2352]: I0813 00:49:35.904158 2352 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 00:49:37.226539 systemd[1]: Reload requested from client PID 2619 ('systemctl') (unit session-7.scope)... Aug 13 00:49:37.226558 systemd[1]: Reloading... Aug 13 00:49:37.361564 zram_generator::config[2666]: No configuration found. Aug 13 00:49:37.464361 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:49:37.591683 systemd[1]: Reloading finished in 364 ms. Aug 13 00:49:37.618854 kubelet[2352]: I0813 00:49:37.618767 2352 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:49:37.619024 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:49:37.636611 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 00:49:37.636966 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:49:37.637027 systemd[1]: kubelet.service: Consumed 989ms CPU time, 128.1M memory peak. Aug 13 00:49:37.640960 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:49:37.822651 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:49:37.833141 (kubelet)[2714]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 00:49:37.884668 kubelet[2714]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:49:37.886363 kubelet[2714]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 13 00:49:37.886363 kubelet[2714]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Aug 13 00:49:37.886363 kubelet[2714]: I0813 00:49:37.885150 2714 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:49:37.894547 kubelet[2714]: I0813 00:49:37.894482 2714 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 00:49:37.894637 kubelet[2714]: I0813 00:49:37.894629 2714 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:49:37.895024 kubelet[2714]: I0813 00:49:37.894996 2714 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 00:49:37.896991 kubelet[2714]: I0813 00:49:37.896721 2714 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 13 00:49:37.898597 kubelet[2714]: I0813 00:49:37.898575 2714 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:49:37.903678 kubelet[2714]: I0813 00:49:37.903650 2714 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Aug 13 00:49:37.910307 kubelet[2714]: I0813 00:49:37.910273 2714 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 13 00:49:37.910572 kubelet[2714]: I0813 00:49:37.910542 2714 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 00:49:37.911166 kubelet[2714]: I0813 00:49:37.910776 2714 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:49:37.911166 kubelet[2714]: I0813 00:49:37.910805 2714 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-234-199-101","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 00:49:37.911166 kubelet[2714]: I0813 00:49:37.910983 2714 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:49:37.911166 kubelet[2714]: I0813 00:49:37.910992 2714 container_manager_linux.go:300] "Creating device plugin 
manager" Aug 13 00:49:37.912565 kubelet[2714]: I0813 00:49:37.911021 2714 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:49:37.912565 kubelet[2714]: I0813 00:49:37.911133 2714 kubelet.go:408] "Attempting to sync node with API server" Aug 13 00:49:37.912565 kubelet[2714]: I0813 00:49:37.911146 2714 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:49:37.912565 kubelet[2714]: I0813 00:49:37.911178 2714 kubelet.go:314] "Adding apiserver pod source" Aug 13 00:49:37.912565 kubelet[2714]: I0813 00:49:37.911192 2714 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:49:37.912565 kubelet[2714]: I0813 00:49:37.912275 2714 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Aug 13 00:49:37.912994 kubelet[2714]: I0813 00:49:37.912729 2714 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 00:49:37.913633 kubelet[2714]: I0813 00:49:37.913135 2714 server.go:1274] "Started kubelet" Aug 13 00:49:37.915614 kubelet[2714]: I0813 00:49:37.915341 2714 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:49:37.921616 kubelet[2714]: I0813 00:49:37.921589 2714 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:49:37.924683 kubelet[2714]: I0813 00:49:37.924653 2714 server.go:449] "Adding debug handlers to kubelet server" Aug 13 00:49:37.927329 kubelet[2714]: I0813 00:49:37.926710 2714 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:49:37.927329 kubelet[2714]: I0813 00:49:37.926915 2714 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:49:37.927329 kubelet[2714]: I0813 00:49:37.927120 2714 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:49:37.929643 kubelet[2714]: I0813 00:49:37.929614 2714 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 00:49:37.930540 kubelet[2714]: E0813 00:49:37.929820 2714 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-234-199-101\" not found" Aug 13 00:49:37.931810 kubelet[2714]: I0813 00:49:37.931184 2714 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 00:49:37.931810 kubelet[2714]: I0813 00:49:37.931384 2714 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:49:37.935655 kubelet[2714]: I0813 00:49:37.935622 2714 factory.go:221] Registration of the systemd container factory successfully Aug 13 00:49:37.935811 kubelet[2714]: I0813 00:49:37.935739 2714 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:49:37.942754 kubelet[2714]: I0813 00:49:37.942725 2714 factory.go:221] Registration of the containerd container factory successfully Aug 13 00:49:37.945621 kubelet[2714]: I0813 00:49:37.944971 2714 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 00:49:37.952481 kubelet[2714]: I0813 00:49:37.952424 2714 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 00:49:37.952481 kubelet[2714]: I0813 00:49:37.952458 2714 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 00:49:37.952481 kubelet[2714]: I0813 00:49:37.952475 2714 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 00:49:37.955041 kubelet[2714]: E0813 00:49:37.954425 2714 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:49:37.957625 kubelet[2714]: E0813 00:49:37.957587 2714 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:49:37.997446 kubelet[2714]: I0813 00:49:37.997409 2714 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 00:49:37.999644 kubelet[2714]: I0813 00:49:37.999608 2714 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 00:49:37.999723 kubelet[2714]: I0813 00:49:37.999652 2714 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:49:37.999957 kubelet[2714]: I0813 00:49:37.999928 2714 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 00:49:37.999996 kubelet[2714]: I0813 00:49:37.999949 2714 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 00:49:37.999996 kubelet[2714]: I0813 00:49:37.999973 2714 policy_none.go:49] "None policy: Start" Aug 13 00:49:38.003870 kubelet[2714]: I0813 00:49:38.002961 2714 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 00:49:38.003870 kubelet[2714]: I0813 00:49:38.002990 2714 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:49:38.003870 kubelet[2714]: I0813 00:49:38.003218 2714 state_mem.go:75] "Updated machine memory state" Aug 13 00:49:38.010774 kubelet[2714]: I0813 00:49:38.010753 2714 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 00:49:38.012806 kubelet[2714]: I0813 00:49:38.012779 2714 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:49:38.012845 kubelet[2714]: I0813 00:49:38.012802 2714 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:49:38.013841 kubelet[2714]: I0813 00:49:38.013821 2714 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:49:38.117274 kubelet[2714]: I0813 00:49:38.117236 2714 kubelet_node_status.go:72] "Attempting to register node" node="172-234-199-101" Aug 13 00:49:38.125217 kubelet[2714]: I0813 00:49:38.124991 2714 kubelet_node_status.go:111] "Node was previously registered" node="172-234-199-101" Aug 13 00:49:38.125217 kubelet[2714]: I0813 00:49:38.125059 2714 kubelet_node_status.go:75] "Successfully registered node" node="172-234-199-101" Aug 13 00:49:38.233020 kubelet[2714]: I0813 00:49:38.232817 2714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c54207a78989b998dd1f6033c818d493-usr-share-ca-certificates\") pod \"kube-controller-manager-172-234-199-101\" (UID: \"c54207a78989b998dd1f6033c818d493\") " pod="kube-system/kube-controller-manager-172-234-199-101" Aug 13 00:49:38.233020 kubelet[2714]: I0813 00:49:38.232857 2714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/011977b58a6487cb537f1a7218f983b2-kubeconfig\") pod 
\"kube-scheduler-172-234-199-101\" (UID: \"011977b58a6487cb537f1a7218f983b2\") " pod="kube-system/kube-scheduler-172-234-199-101" Aug 13 00:49:38.233020 kubelet[2714]: I0813 00:49:38.232875 2714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c54207a78989b998dd1f6033c818d493-k8s-certs\") pod \"kube-controller-manager-172-234-199-101\" (UID: \"c54207a78989b998dd1f6033c818d493\") " pod="kube-system/kube-controller-manager-172-234-199-101" Aug 13 00:49:38.233020 kubelet[2714]: I0813 00:49:38.232893 2714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c54207a78989b998dd1f6033c818d493-kubeconfig\") pod \"kube-controller-manager-172-234-199-101\" (UID: \"c54207a78989b998dd1f6033c818d493\") " pod="kube-system/kube-controller-manager-172-234-199-101" Aug 13 00:49:38.233020 kubelet[2714]: I0813 00:49:38.232910 2714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/97f7debad30380a392b5cd05abad2964-usr-share-ca-certificates\") pod \"kube-apiserver-172-234-199-101\" (UID: \"97f7debad30380a392b5cd05abad2964\") " pod="kube-system/kube-apiserver-172-234-199-101" Aug 13 00:49:38.233247 kubelet[2714]: I0813 00:49:38.232930 2714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c54207a78989b998dd1f6033c818d493-ca-certs\") pod \"kube-controller-manager-172-234-199-101\" (UID: \"c54207a78989b998dd1f6033c818d493\") " pod="kube-system/kube-controller-manager-172-234-199-101" Aug 13 00:49:38.233247 kubelet[2714]: I0813 00:49:38.232955 2714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c54207a78989b998dd1f6033c818d493-flexvolume-dir\") pod \"kube-controller-manager-172-234-199-101\" (UID: \"c54207a78989b998dd1f6033c818d493\") " pod="kube-system/kube-controller-manager-172-234-199-101" Aug 13 00:49:38.233247 kubelet[2714]: I0813 00:49:38.232969 2714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/97f7debad30380a392b5cd05abad2964-ca-certs\") pod \"kube-apiserver-172-234-199-101\" (UID: \"97f7debad30380a392b5cd05abad2964\") " pod="kube-system/kube-apiserver-172-234-199-101" Aug 13 00:49:38.233247 kubelet[2714]: I0813 00:49:38.232984 2714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/97f7debad30380a392b5cd05abad2964-k8s-certs\") pod \"kube-apiserver-172-234-199-101\" (UID: \"97f7debad30380a392b5cd05abad2964\") " pod="kube-system/kube-apiserver-172-234-199-101" Aug 13 00:49:38.364646 kubelet[2714]: E0813 00:49:38.364602 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:49:38.365708 kubelet[2714]: E0813 00:49:38.365672 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:49:38.366818 kubelet[2714]: E0813 
00:49:38.366006 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:49:38.922886 kubelet[2714]: I0813 00:49:38.922815 2714 apiserver.go:52] "Watching apiserver" Aug 13 00:49:38.932240 kubelet[2714]: I0813 00:49:38.932190 2714 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 00:49:38.974344 kubelet[2714]: E0813 00:49:38.974306 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:49:38.976360 kubelet[2714]: E0813 00:49:38.976326 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:49:38.983400 kubelet[2714]: E0813 00:49:38.983358 2714 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-172-234-199-101\" already exists" pod="kube-system/kube-scheduler-172-234-199-101" Aug 13 00:49:38.983484 kubelet[2714]: E0813 00:49:38.983460 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:49:38.998230 kubelet[2714]: I0813 00:49:38.997251 2714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-234-199-101" podStartSLOduration=0.997227602 podStartE2EDuration="997.227602ms" podCreationTimestamp="2025-08-13 00:49:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:49:38.997099962 +0000 UTC m=+1.158920081" watchObservedRunningTime="2025-08-13 00:49:38.997227602 +0000 UTC m=+1.159047721" Aug 13 00:49:39.004911 kubelet[2714]: I0813 00:49:39.004449 2714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-234-199-101" podStartSLOduration=1.004430122 podStartE2EDuration="1.004430122s" podCreationTimestamp="2025-08-13 00:49:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:49:39.004305802 +0000 UTC m=+1.166125921" watchObservedRunningTime="2025-08-13 00:49:39.004430122 +0000 UTC m=+1.166250241" Aug 13 00:49:39.813710 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Aug 13 00:49:39.975189 kubelet[2714]: E0813 00:49:39.975143 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:49:39.975741 kubelet[2714]: E0813 00:49:39.975146 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:49:41.300446 kubelet[2714]: E0813 00:49:41.300416 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:49:41.660958 kubelet[2714]: E0813 00:49:41.660868 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:49:42.373441 kubelet[2714]: I0813 00:49:42.373397 2714 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 00:49:42.375375 containerd[1575]: time="2025-08-13T00:49:42.375348237Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 13 00:49:42.377881 kubelet[2714]: I0813 00:49:42.377665 2714 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 00:49:43.126932 kubelet[2714]: I0813 00:49:43.125878 2714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-234-199-101" podStartSLOduration=5.125860936 podStartE2EDuration="5.125860936s" podCreationTimestamp="2025-08-13 00:49:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:49:39.013053202 +0000 UTC m=+1.174873321" watchObservedRunningTime="2025-08-13 00:49:43.125860936 +0000 UTC m=+5.287681055" Aug 13 00:49:43.144718 systemd[1]: Created slice kubepods-besteffort-pod07478ebb_45ed_4651_b4fe_673fa76a7201.slice - libcontainer container kubepods-besteffort-pod07478ebb_45ed_4651_b4fe_673fa76a7201.slice. 
Aug 13 00:49:43.164696 kubelet[2714]: I0813 00:49:43.164653 2714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg6vq\" (UniqueName: \"kubernetes.io/projected/07478ebb-45ed-4651-b4fe-673fa76a7201-kube-api-access-hg6vq\") pod \"kube-proxy-vxgfg\" (UID: \"07478ebb-45ed-4651-b4fe-673fa76a7201\") " pod="kube-system/kube-proxy-vxgfg" Aug 13 00:49:43.164696 kubelet[2714]: I0813 00:49:43.164688 2714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/07478ebb-45ed-4651-b4fe-673fa76a7201-kube-proxy\") pod \"kube-proxy-vxgfg\" (UID: \"07478ebb-45ed-4651-b4fe-673fa76a7201\") " pod="kube-system/kube-proxy-vxgfg" Aug 13 00:49:43.164696 kubelet[2714]: I0813 00:49:43.164705 2714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/07478ebb-45ed-4651-b4fe-673fa76a7201-xtables-lock\") pod \"kube-proxy-vxgfg\" (UID: \"07478ebb-45ed-4651-b4fe-673fa76a7201\") " pod="kube-system/kube-proxy-vxgfg" Aug 13 00:49:43.164857 kubelet[2714]: I0813 00:49:43.164728 2714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/07478ebb-45ed-4651-b4fe-673fa76a7201-lib-modules\") pod \"kube-proxy-vxgfg\" (UID: \"07478ebb-45ed-4651-b4fe-673fa76a7201\") " pod="kube-system/kube-proxy-vxgfg" Aug 13 00:49:43.274872 systemd[1]: Created slice kubepods-besteffort-pod1dcdd14a_c7af_4c1f_8a8f_37db562cb94a.slice - libcontainer container kubepods-besteffort-pod1dcdd14a_c7af_4c1f_8a8f_37db562cb94a.slice. Aug 13 00:49:43.366202 kubelet[2714]: I0813 00:49:43.366089 2714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh57g\" (UniqueName: \"kubernetes.io/projected/1dcdd14a-c7af-4c1f-8a8f-37db562cb94a-kube-api-access-dh57g\") pod \"tigera-operator-5bf8dfcb4-jlhrh\" (UID: \"1dcdd14a-c7af-4c1f-8a8f-37db562cb94a\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-jlhrh" Aug 13 00:49:43.366462 kubelet[2714]: I0813 00:49:43.366369 2714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1dcdd14a-c7af-4c1f-8a8f-37db562cb94a-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-jlhrh\" (UID: \"1dcdd14a-c7af-4c1f-8a8f-37db562cb94a\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-jlhrh" Aug 13 00:49:43.462538 kubelet[2714]: E0813 00:49:43.462372 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:49:43.463970 containerd[1575]: time="2025-08-13T00:49:43.463606843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vxgfg,Uid:07478ebb-45ed-4651-b4fe-673fa76a7201,Namespace:kube-system,Attempt:0,}" Aug 13 00:49:43.490273 containerd[1575]: time="2025-08-13T00:49:43.490102618Z" level=info msg="connecting to shim e7cb45fd3fde5672419dd127e9891e214fde83f11acb50a29e9d31bbd18989e1" address="unix:///run/containerd/s/866398dc763a15f167580d28383d60cd367edbfcfcc5920487c00d68dbf90fbd" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:49:43.516844 systemd[1]: Started cri-containerd-e7cb45fd3fde5672419dd127e9891e214fde83f11acb50a29e9d31bbd18989e1.scope - libcontainer container 
e7cb45fd3fde5672419dd127e9891e214fde83f11acb50a29e9d31bbd18989e1. Aug 13 00:49:43.545064 containerd[1575]: time="2025-08-13T00:49:43.545028840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vxgfg,Uid:07478ebb-45ed-4651-b4fe-673fa76a7201,Namespace:kube-system,Attempt:0,} returns sandbox id \"e7cb45fd3fde5672419dd127e9891e214fde83f11acb50a29e9d31bbd18989e1\"" Aug 13 00:49:43.546087 kubelet[2714]: E0813 00:49:43.546067 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:49:43.548444 containerd[1575]: time="2025-08-13T00:49:43.548422028Z" level=info msg="CreateContainer within sandbox \"e7cb45fd3fde5672419dd127e9891e214fde83f11acb50a29e9d31bbd18989e1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 00:49:43.558844 containerd[1575]: time="2025-08-13T00:49:43.558820052Z" level=info msg="Container 8f978433ae75ff6bb09ce178bdbedcaff0bde8e9a161a1d1b071f9b6659e786e: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:49:43.563776 containerd[1575]: time="2025-08-13T00:49:43.563746624Z" level=info msg="CreateContainer within sandbox \"e7cb45fd3fde5672419dd127e9891e214fde83f11acb50a29e9d31bbd18989e1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8f978433ae75ff6bb09ce178bdbedcaff0bde8e9a161a1d1b071f9b6659e786e\"" Aug 13 00:49:43.564635 containerd[1575]: time="2025-08-13T00:49:43.564330721Z" level=info msg="StartContainer for \"8f978433ae75ff6bb09ce178bdbedcaff0bde8e9a161a1d1b071f9b6659e786e\"" Aug 13 00:49:43.565713 containerd[1575]: time="2025-08-13T00:49:43.565681415Z" level=info msg="connecting to shim 8f978433ae75ff6bb09ce178bdbedcaff0bde8e9a161a1d1b071f9b6659e786e" address="unix:///run/containerd/s/866398dc763a15f167580d28383d60cd367edbfcfcc5920487c00d68dbf90fbd" protocol=ttrpc version=3 Aug 13 00:49:43.583651 systemd[1]: Started cri-containerd-8f978433ae75ff6bb09ce178bdbedcaff0bde8e9a161a1d1b071f9b6659e786e.scope - libcontainer container 8f978433ae75ff6bb09ce178bdbedcaff0bde8e9a161a1d1b071f9b6659e786e. Aug 13 00:49:43.585964 containerd[1575]: time="2025-08-13T00:49:43.585944272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-jlhrh,Uid:1dcdd14a-c7af-4c1f-8a8f-37db562cb94a,Namespace:tigera-operator,Attempt:0,}" Aug 13 00:49:43.600072 containerd[1575]: time="2025-08-13T00:49:43.600018468Z" level=info msg="connecting to shim 66d7027e0c86c0dbacf4cf20a21bfeb1ddf6804c63321f8aa58ae81bc308e8f3" address="unix:///run/containerd/s/6b9d0e73c2951b50df09ed98f13a0b2b722cac1526d8e58ca389d63b26ee1faa" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:49:43.630659 systemd[1]: Started cri-containerd-66d7027e0c86c0dbacf4cf20a21bfeb1ddf6804c63321f8aa58ae81bc308e8f3.scope - libcontainer container 66d7027e0c86c0dbacf4cf20a21bfeb1ddf6804c63321f8aa58ae81bc308e8f3. 
Aug 13 00:49:43.650321 containerd[1575]: time="2025-08-13T00:49:43.650267143Z" level=info msg="StartContainer for \"8f978433ae75ff6bb09ce178bdbedcaff0bde8e9a161a1d1b071f9b6659e786e\" returns successfully" Aug 13 00:49:43.692634 containerd[1575]: time="2025-08-13T00:49:43.692505660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-jlhrh,Uid:1dcdd14a-c7af-4c1f-8a8f-37db562cb94a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"66d7027e0c86c0dbacf4cf20a21bfeb1ddf6804c63321f8aa58ae81bc308e8f3\"" Aug 13 00:49:43.697819 containerd[1575]: time="2025-08-13T00:49:43.697773993Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Aug 13 00:49:43.994655 kubelet[2714]: E0813 00:49:43.994605 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:49:44.006151 kubelet[2714]: I0813 00:49:44.005977 2714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vxgfg" podStartSLOduration=1.005958476 podStartE2EDuration="1.005958476s" podCreationTimestamp="2025-08-13 00:49:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:49:44.00532142 +0000 UTC m=+6.167141539" watchObservedRunningTime="2025-08-13 00:49:44.005958476 +0000 UTC m=+6.167778595" Aug 13 00:49:44.804106 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3765334661.mount: Deactivated successfully. Aug 13 00:49:44.998299 kubelet[2714]: E0813 00:49:44.998260 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:49:45.187361 kubelet[2714]: E0813 00:49:45.186452 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:49:45.267132 containerd[1575]: time="2025-08-13T00:49:45.267076039Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:49:45.268316 containerd[1575]: time="2025-08-13T00:49:45.267774505Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Aug 13 00:49:45.268652 containerd[1575]: time="2025-08-13T00:49:45.268599984Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:49:45.270448 containerd[1575]: time="2025-08-13T00:49:45.270427783Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:49:45.271423 containerd[1575]: time="2025-08-13T00:49:45.271039433Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 1.573221092s" Aug 13 00:49:45.271423 containerd[1575]: 
time="2025-08-13T00:49:45.271070871Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Aug 13 00:49:45.274298 containerd[1575]: time="2025-08-13T00:49:45.274087362Z" level=info msg="CreateContainer within sandbox \"66d7027e0c86c0dbacf4cf20a21bfeb1ddf6804c63321f8aa58ae81bc308e8f3\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Aug 13 00:49:45.279821 containerd[1575]: time="2025-08-13T00:49:45.279785209Z" level=info msg="Container c8d79fcf0ea792692eac0766f30fc3de884b10920f99e0b37ac0d8a73eb45c4e: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:49:45.289071 containerd[1575]: time="2025-08-13T00:49:45.289032511Z" level=info msg="CreateContainer within sandbox \"66d7027e0c86c0dbacf4cf20a21bfeb1ddf6804c63321f8aa58ae81bc308e8f3\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"c8d79fcf0ea792692eac0766f30fc3de884b10920f99e0b37ac0d8a73eb45c4e\"" Aug 13 00:49:45.289819 containerd[1575]: time="2025-08-13T00:49:45.289775654Z" level=info msg="StartContainer for \"c8d79fcf0ea792692eac0766f30fc3de884b10920f99e0b37ac0d8a73eb45c4e\"" Aug 13 00:49:45.290498 containerd[1575]: time="2025-08-13T00:49:45.290445431Z" level=info msg="connecting to shim c8d79fcf0ea792692eac0766f30fc3de884b10920f99e0b37ac0d8a73eb45c4e" address="unix:///run/containerd/s/6b9d0e73c2951b50df09ed98f13a0b2b722cac1526d8e58ca389d63b26ee1faa" protocol=ttrpc version=3 Aug 13 00:49:45.318667 systemd[1]: Started cri-containerd-c8d79fcf0ea792692eac0766f30fc3de884b10920f99e0b37ac0d8a73eb45c4e.scope - libcontainer container c8d79fcf0ea792692eac0766f30fc3de884b10920f99e0b37ac0d8a73eb45c4e. Aug 13 00:49:45.351952 containerd[1575]: time="2025-08-13T00:49:45.351907043Z" level=info msg="StartContainer for \"c8d79fcf0ea792692eac0766f30fc3de884b10920f99e0b37ac0d8a73eb45c4e\" returns successfully" Aug 13 00:49:46.001158 kubelet[2714]: E0813 00:49:46.001113 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:49:50.837098 sudo[1812]: pam_unix(sudo:session): session closed for user root Aug 13 00:49:50.889545 sshd[1811]: Connection closed by 147.75.109.163 port 49236 Aug 13 00:49:50.890053 sshd-session[1809]: pam_unix(sshd:session): session closed for user core Aug 13 00:49:50.895087 systemd[1]: sshd@6-172.234.199.101:22-147.75.109.163:49236.service: Deactivated successfully. Aug 13 00:49:50.898938 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 00:49:50.899378 systemd[1]: session-7.scope: Consumed 3.547s CPU time, 226.2M memory peak. Aug 13 00:49:50.901222 systemd-logind[1528]: Session 7 logged out. Waiting for processes to exit. Aug 13 00:49:50.904892 systemd-logind[1528]: Removed session 7. 
Aug 13 00:49:51.306569 kubelet[2714]: E0813 00:49:51.306284 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:49:51.323318 kubelet[2714]: I0813 00:49:51.323208 2714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-jlhrh" podStartSLOduration=6.747837231 podStartE2EDuration="8.323196451s" podCreationTimestamp="2025-08-13 00:49:43 +0000 UTC" firstStartedPulling="2025-08-13 00:49:43.696982018 +0000 UTC m=+5.858802137" lastFinishedPulling="2025-08-13 00:49:45.272341238 +0000 UTC m=+7.434161357" observedRunningTime="2025-08-13 00:49:46.024788786 +0000 UTC m=+8.186608915" watchObservedRunningTime="2025-08-13 00:49:51.323196451 +0000 UTC m=+13.485016570" Aug 13 00:49:51.690680 kubelet[2714]: E0813 00:49:51.690430 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:49:52.014915 kubelet[2714]: E0813 00:49:52.014818 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:49:53.568781 systemd[1]: Created slice kubepods-besteffort-podd6645b68_9442_4e2c_a2e3_9a512d1a8a2e.slice - libcontainer container kubepods-besteffort-podd6645b68_9442_4e2c_a2e3_9a512d1a8a2e.slice. Aug 13 00:49:53.635680 kubelet[2714]: I0813 00:49:53.635642 2714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d6645b68-9442-4e2c-a2e3-9a512d1a8a2e-tigera-ca-bundle\") pod \"calico-typha-644589c98-5v7wp\" (UID: \"d6645b68-9442-4e2c-a2e3-9a512d1a8a2e\") " pod="calico-system/calico-typha-644589c98-5v7wp" Aug 13 00:49:53.635680 kubelet[2714]: I0813 00:49:53.635682 2714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngr88\" (UniqueName: \"kubernetes.io/projected/d6645b68-9442-4e2c-a2e3-9a512d1a8a2e-kube-api-access-ngr88\") pod \"calico-typha-644589c98-5v7wp\" (UID: \"d6645b68-9442-4e2c-a2e3-9a512d1a8a2e\") " pod="calico-system/calico-typha-644589c98-5v7wp" Aug 13 00:49:53.636078 kubelet[2714]: I0813 00:49:53.635703 2714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d6645b68-9442-4e2c-a2e3-9a512d1a8a2e-typha-certs\") pod \"calico-typha-644589c98-5v7wp\" (UID: \"d6645b68-9442-4e2c-a2e3-9a512d1a8a2e\") " pod="calico-system/calico-typha-644589c98-5v7wp" Aug 13 00:49:53.876540 kubelet[2714]: E0813 00:49:53.876487 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:49:53.878116 containerd[1575]: time="2025-08-13T00:49:53.877705000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-644589c98-5v7wp,Uid:d6645b68-9442-4e2c-a2e3-9a512d1a8a2e,Namespace:calico-system,Attempt:0,}" Aug 13 00:49:53.884191 systemd[1]: Created slice kubepods-besteffort-podab709cf9_e61c_420b_90c5_1c0355308621.slice - libcontainer container kubepods-besteffort-podab709cf9_e61c_420b_90c5_1c0355308621.slice. 
Aug 13 00:49:53.922059 containerd[1575]: time="2025-08-13T00:49:53.921974170Z" level=info msg="connecting to shim f4dc011f92297f78e36f94b7dbb7843b8ccbeba56b97dae72518dd4bf406ab01" address="unix:///run/containerd/s/0e004117ba161e07c2371b1e8bb2699165ff1ea3269bd8625cad0d19bb5c9a86" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:49:53.937710 kubelet[2714]: I0813 00:49:53.937512 2714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ab709cf9-e61c-420b-90c5-1c0355308621-lib-modules\") pod \"calico-node-x7x94\" (UID: \"ab709cf9-e61c-420b-90c5-1c0355308621\") " pod="calico-system/calico-node-x7x94" Aug 13 00:49:53.937710 kubelet[2714]: I0813 00:49:53.937568 2714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ab709cf9-e61c-420b-90c5-1c0355308621-cni-net-dir\") pod \"calico-node-x7x94\" (UID: \"ab709cf9-e61c-420b-90c5-1c0355308621\") " pod="calico-system/calico-node-x7x94" Aug 13 00:49:53.937710 kubelet[2714]: I0813 00:49:53.937584 2714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ab709cf9-e61c-420b-90c5-1c0355308621-node-certs\") pod \"calico-node-x7x94\" (UID: \"ab709cf9-e61c-420b-90c5-1c0355308621\") " pod="calico-system/calico-node-x7x94" Aug 13 00:49:53.937710 kubelet[2714]: I0813 00:49:53.937597 2714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ab709cf9-e61c-420b-90c5-1c0355308621-var-run-calico\") pod \"calico-node-x7x94\" (UID: \"ab709cf9-e61c-420b-90c5-1c0355308621\") " pod="calico-system/calico-node-x7x94" Aug 13 00:49:53.937710 kubelet[2714]: I0813 00:49:53.937609 2714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmm4k\" (UniqueName: \"kubernetes.io/projected/ab709cf9-e61c-420b-90c5-1c0355308621-kube-api-access-kmm4k\") pod \"calico-node-x7x94\" (UID: \"ab709cf9-e61c-420b-90c5-1c0355308621\") " pod="calico-system/calico-node-x7x94" Aug 13 00:49:53.937897 kubelet[2714]: I0813 00:49:53.937624 2714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ab709cf9-e61c-420b-90c5-1c0355308621-flexvol-driver-host\") pod \"calico-node-x7x94\" (UID: \"ab709cf9-e61c-420b-90c5-1c0355308621\") " pod="calico-system/calico-node-x7x94" Aug 13 00:49:53.937897 kubelet[2714]: I0813 00:49:53.937636 2714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ab709cf9-e61c-420b-90c5-1c0355308621-xtables-lock\") pod \"calico-node-x7x94\" (UID: \"ab709cf9-e61c-420b-90c5-1c0355308621\") " pod="calico-system/calico-node-x7x94" Aug 13 00:49:53.937897 kubelet[2714]: I0813 00:49:53.937647 2714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ab709cf9-e61c-420b-90c5-1c0355308621-cni-log-dir\") pod \"calico-node-x7x94\" (UID: \"ab709cf9-e61c-420b-90c5-1c0355308621\") " pod="calico-system/calico-node-x7x94" Aug 13 00:49:53.937897 kubelet[2714]: I0813 00:49:53.937665 2714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab709cf9-e61c-420b-90c5-1c0355308621-tigera-ca-bundle\") pod \"calico-node-x7x94\" (UID: \"ab709cf9-e61c-420b-90c5-1c0355308621\") " pod="calico-system/calico-node-x7x94" Aug 13 00:49:53.937897 kubelet[2714]: I0813 00:49:53.937680 2714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ab709cf9-e61c-420b-90c5-1c0355308621-var-lib-calico\") pod \"calico-node-x7x94\" (UID: \"ab709cf9-e61c-420b-90c5-1c0355308621\") " pod="calico-system/calico-node-x7x94" Aug 13 00:49:53.938003 kubelet[2714]: I0813 00:49:53.937787 2714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ab709cf9-e61c-420b-90c5-1c0355308621-cni-bin-dir\") pod \"calico-node-x7x94\" (UID: \"ab709cf9-e61c-420b-90c5-1c0355308621\") " pod="calico-system/calico-node-x7x94" Aug 13 00:49:53.938003 kubelet[2714]: I0813 00:49:53.937815 2714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ab709cf9-e61c-420b-90c5-1c0355308621-policysync\") pod \"calico-node-x7x94\" (UID: \"ab709cf9-e61c-420b-90c5-1c0355308621\") " pod="calico-system/calico-node-x7x94" Aug 13 00:49:53.955804 systemd[1]: Started cri-containerd-f4dc011f92297f78e36f94b7dbb7843b8ccbeba56b97dae72518dd4bf406ab01.scope - libcontainer container f4dc011f92297f78e36f94b7dbb7843b8ccbeba56b97dae72518dd4bf406ab01. Aug 13 00:49:54.032007 containerd[1575]: time="2025-08-13T00:49:54.031963904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-644589c98-5v7wp,Uid:d6645b68-9442-4e2c-a2e3-9a512d1a8a2e,Namespace:calico-system,Attempt:0,} returns sandbox id \"f4dc011f92297f78e36f94b7dbb7843b8ccbeba56b97dae72518dd4bf406ab01\"" Aug 13 00:49:54.034191 kubelet[2714]: E0813 00:49:54.034140 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:49:54.035925 containerd[1575]: time="2025-08-13T00:49:54.035897225Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Aug 13 00:49:54.046625 kubelet[2714]: E0813 00:49:54.043478 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:54.046625 kubelet[2714]: W0813 00:49:54.043690 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:54.046625 kubelet[2714]: E0813 00:49:54.043710 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:49:54.046625 kubelet[2714]: E0813 00:49:54.044603 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:54.046625 kubelet[2714]: W0813 00:49:54.044613 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:54.046625 kubelet[2714]: E0813 00:49:54.044622 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:54.046625 kubelet[2714]: E0813 00:49:54.045509 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:54.046625 kubelet[2714]: W0813 00:49:54.045622 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:54.046625 kubelet[2714]: E0813 00:49:54.045633 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:54.047061 kubelet[2714]: E0813 00:49:54.046921 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:54.047061 kubelet[2714]: W0813 00:49:54.046935 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:54.047331 kubelet[2714]: E0813 00:49:54.047278 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:54.051536 kubelet[2714]: E0813 00:49:54.047950 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:54.051536 kubelet[2714]: W0813 00:49:54.047963 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:54.051536 kubelet[2714]: E0813 00:49:54.047972 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:54.052122 kubelet[2714]: E0813 00:49:54.052105 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:54.052411 kubelet[2714]: W0813 00:49:54.052369 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:54.052411 kubelet[2714]: E0813 00:49:54.052388 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:49:54.061250 kubelet[2714]: E0813 00:49:54.061233 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:54.061250 kubelet[2714]: W0813 00:49:54.061246 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:54.061333 kubelet[2714]: E0813 00:49:54.061264 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:54.150792 kubelet[2714]: E0813 00:49:54.150200 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mmxc6" podUID="7697ce71-aa40-4c78-acaa-c59079720a2c" Aug 13 00:49:54.192796 containerd[1575]: time="2025-08-13T00:49:54.192765434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-x7x94,Uid:ab709cf9-e61c-420b-90c5-1c0355308621,Namespace:calico-system,Attempt:0,}" Aug 13 00:49:54.210277 containerd[1575]: time="2025-08-13T00:49:54.210230950Z" level=info msg="connecting to shim 1c8288186968964e8781e7fefb015ce476b20e510bc2c980e05bbd8a3e06754e" address="unix:///run/containerd/s/2f64bef4556efd88bdc0bed0d4eac38e4fdfa25b706abb4b5ce44183cd686752" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:49:54.223786 kubelet[2714]: E0813 00:49:54.223755 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:54.223786 kubelet[2714]: W0813 00:49:54.223777 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:54.223880 kubelet[2714]: E0813 00:49:54.223822 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:54.224373 kubelet[2714]: E0813 00:49:54.224263 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:54.224373 kubelet[2714]: W0813 00:49:54.224275 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:54.224373 kubelet[2714]: E0813 00:49:54.224283 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:49:54.224708 kubelet[2714]: E0813 00:49:54.224682 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:54.224708 kubelet[2714]: W0813 00:49:54.224697 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:54.224708 kubelet[2714]: E0813 00:49:54.224705 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:54.224946 kubelet[2714]: E0813 00:49:54.224913 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:54.224946 kubelet[2714]: W0813 00:49:54.224943 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:54.224998 kubelet[2714]: E0813 00:49:54.224952 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:54.225401 kubelet[2714]: E0813 00:49:54.225361 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:54.225401 kubelet[2714]: W0813 00:49:54.225373 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:54.225401 kubelet[2714]: E0813 00:49:54.225381 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:54.225625 kubelet[2714]: E0813 00:49:54.225595 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:54.225625 kubelet[2714]: W0813 00:49:54.225607 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:54.225625 kubelet[2714]: E0813 00:49:54.225615 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:54.225817 kubelet[2714]: E0813 00:49:54.225786 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:54.225817 kubelet[2714]: W0813 00:49:54.225794 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:54.225817 kubelet[2714]: E0813 00:49:54.225801 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:49:54.226382 kubelet[2714]: E0813 00:49:54.226360 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:54.226382 kubelet[2714]: W0813 00:49:54.226374 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:54.226382 kubelet[2714]: E0813 00:49:54.226382 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:54.226753 kubelet[2714]: E0813 00:49:54.226703 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:54.226753 kubelet[2714]: W0813 00:49:54.226716 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:54.226753 kubelet[2714]: E0813 00:49:54.226723 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:54.227055 kubelet[2714]: E0813 00:49:54.226903 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:54.227055 kubelet[2714]: W0813 00:49:54.226913 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:54.227055 kubelet[2714]: E0813 00:49:54.226923 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:54.227155 kubelet[2714]: E0813 00:49:54.227107 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:54.227155 kubelet[2714]: W0813 00:49:54.227114 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:54.227155 kubelet[2714]: E0813 00:49:54.227121 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:54.227688 kubelet[2714]: E0813 00:49:54.227310 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:54.227688 kubelet[2714]: W0813 00:49:54.227321 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:54.227688 kubelet[2714]: E0813 00:49:54.227348 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:49:54.227688 kubelet[2714]: E0813 00:49:54.227595 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:54.227688 kubelet[2714]: W0813 00:49:54.227603 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:54.227688 kubelet[2714]: E0813 00:49:54.227610 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:54.227847 kubelet[2714]: E0813 00:49:54.227836 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:54.227847 kubelet[2714]: W0813 00:49:54.227844 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:54.227885 kubelet[2714]: E0813 00:49:54.227869 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:54.228321 kubelet[2714]: E0813 00:49:54.228217 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:54.228321 kubelet[2714]: W0813 00:49:54.228228 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:54.228321 kubelet[2714]: E0813 00:49:54.228235 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:54.228683 kubelet[2714]: E0813 00:49:54.228552 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:54.228683 kubelet[2714]: W0813 00:49:54.228566 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:54.228683 kubelet[2714]: E0813 00:49:54.228585 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:54.228790 kubelet[2714]: E0813 00:49:54.228742 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:54.228790 kubelet[2714]: W0813 00:49:54.228749 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:54.228790 kubelet[2714]: E0813 00:49:54.228756 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:49:54.229715 kubelet[2714]: E0813 00:49:54.228934 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:54.229715 kubelet[2714]: W0813 00:49:54.228947 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:54.229715 kubelet[2714]: E0813 00:49:54.228972 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:54.229715 kubelet[2714]: E0813 00:49:54.229131 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:54.229715 kubelet[2714]: W0813 00:49:54.229139 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:54.229715 kubelet[2714]: E0813 00:49:54.229145 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:54.229715 kubelet[2714]: E0813 00:49:54.229302 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:54.229715 kubelet[2714]: W0813 00:49:54.229309 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:54.229715 kubelet[2714]: E0813 00:49:54.229333 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:54.239661 systemd[1]: Started cri-containerd-1c8288186968964e8781e7fefb015ce476b20e510bc2c980e05bbd8a3e06754e.scope - libcontainer container 1c8288186968964e8781e7fefb015ce476b20e510bc2c980e05bbd8a3e06754e. Aug 13 00:49:54.242707 kubelet[2714]: E0813 00:49:54.242672 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:54.242707 kubelet[2714]: W0813 00:49:54.242691 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:54.242707 kubelet[2714]: E0813 00:49:54.242707 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:49:54.242788 kubelet[2714]: I0813 00:49:54.242728 2714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7697ce71-aa40-4c78-acaa-c59079720a2c-varrun\") pod \"csi-node-driver-mmxc6\" (UID: \"7697ce71-aa40-4c78-acaa-c59079720a2c\") " pod="calico-system/csi-node-driver-mmxc6" Aug 13 00:49:54.243765 kubelet[2714]: E0813 00:49:54.243742 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:54.243765 kubelet[2714]: W0813 00:49:54.243760 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:54.243845 kubelet[2714]: E0813 00:49:54.243770 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:54.243845 kubelet[2714]: I0813 00:49:54.243784 2714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7697ce71-aa40-4c78-acaa-c59079720a2c-kubelet-dir\") pod \"csi-node-driver-mmxc6\" (UID: \"7697ce71-aa40-4c78-acaa-c59079720a2c\") " pod="calico-system/csi-node-driver-mmxc6" Aug 13 00:49:54.244219 kubelet[2714]: E0813 00:49:54.244167 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:54.244320 kubelet[2714]: W0813 00:49:54.244298 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:54.244349 kubelet[2714]: E0813 00:49:54.244329 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:54.244349 kubelet[2714]: I0813 00:49:54.244343 2714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7697ce71-aa40-4c78-acaa-c59079720a2c-socket-dir\") pod \"csi-node-driver-mmxc6\" (UID: \"7697ce71-aa40-4c78-acaa-c59079720a2c\") " pod="calico-system/csi-node-driver-mmxc6" Aug 13 00:49:54.244910 kubelet[2714]: E0813 00:49:54.244884 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:54.244910 kubelet[2714]: W0813 00:49:54.244901 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:54.244968 kubelet[2714]: E0813 00:49:54.244937 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:49:54.244968 kubelet[2714]: I0813 00:49:54.244952 2714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7697ce71-aa40-4c78-acaa-c59079720a2c-registration-dir\") pod \"csi-node-driver-mmxc6\" (UID: \"7697ce71-aa40-4c78-acaa-c59079720a2c\") " pod="calico-system/csi-node-driver-mmxc6" Aug 13 00:49:54.245350 kubelet[2714]: E0813 00:49:54.245328 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:54.245501 kubelet[2714]: W0813 00:49:54.245465 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:54.245907 kubelet[2714]: E0813 00:49:54.245555 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:54.246011 kubelet[2714]: E0813 00:49:54.245986 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:54.246011 kubelet[2714]: W0813 00:49:54.246002 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:54.246245 kubelet[2714]: E0813 00:49:54.246187 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:54.247064 kubelet[2714]: E0813 00:49:54.246990 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:54.247064 kubelet[2714]: W0813 00:49:54.247004 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:54.247265 kubelet[2714]: E0813 00:49:54.247223 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:54.247700 kubelet[2714]: E0813 00:49:54.247558 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:54.247700 kubelet[2714]: W0813 00:49:54.247569 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:54.247801 kubelet[2714]: E0813 00:49:54.247788 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:49:54.248574 kubelet[2714]: E0813 00:49:54.248550 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:54.248574 kubelet[2714]: W0813 00:49:54.248561 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:54.248819 kubelet[2714]: E0813 00:49:54.248739 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:54.248819 kubelet[2714]: I0813 00:49:54.248769 2714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz475\" (UniqueName: \"kubernetes.io/projected/7697ce71-aa40-4c78-acaa-c59079720a2c-kube-api-access-bz475\") pod \"csi-node-driver-mmxc6\" (UID: \"7697ce71-aa40-4c78-acaa-c59079720a2c\") " pod="calico-system/csi-node-driver-mmxc6" Aug 13 00:49:54.249694 kubelet[2714]: E0813 00:49:54.249657 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:54.249933 kubelet[2714]: W0813 00:49:54.249752 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:54.250191 kubelet[2714]: E0813 00:49:54.250086 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:54.250191 kubelet[2714]: W0813 00:49:54.250095 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:54.250191 kubelet[2714]: E0813 00:49:54.250104 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:54.250191 kubelet[2714]: E0813 00:49:54.250115 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:54.250967 kubelet[2714]: E0813 00:49:54.250679 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:54.250967 kubelet[2714]: W0813 00:49:54.250690 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:54.250967 kubelet[2714]: E0813 00:49:54.250699 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:49:54.251388 kubelet[2714]: E0813 00:49:54.251296 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:54.251388 kubelet[2714]: W0813 00:49:54.251307 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:54.251388 kubelet[2714]: E0813 00:49:54.251315 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:54.252208 kubelet[2714]: E0813 00:49:54.252063 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:54.252208 kubelet[2714]: W0813 00:49:54.252073 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:54.252208 kubelet[2714]: E0813 00:49:54.252082 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:54.252700 kubelet[2714]: E0813 00:49:54.252689 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:54.252774 kubelet[2714]: W0813 00:49:54.252748 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:54.252774 kubelet[2714]: E0813 00:49:54.252761 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:54.295402 containerd[1575]: time="2025-08-13T00:49:54.295288891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-x7x94,Uid:ab709cf9-e61c-420b-90c5-1c0355308621,Namespace:calico-system,Attempt:0,} returns sandbox id \"1c8288186968964e8781e7fefb015ce476b20e510bc2c980e05bbd8a3e06754e\"" Aug 13 00:49:54.352958 kubelet[2714]: E0813 00:49:54.352930 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:54.352958 kubelet[2714]: W0813 00:49:54.352948 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:54.352958 kubelet[2714]: E0813 00:49:54.352962 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:49:54.353202 kubelet[2714]: E0813 00:49:54.353185 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:54.353202 kubelet[2714]: W0813 00:49:54.353197 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:54.353287 kubelet[2714]: E0813 00:49:54.353217 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:54.353440 kubelet[2714]: E0813 00:49:54.353423 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:54.353440 kubelet[2714]: W0813 00:49:54.353436 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:54.353492 kubelet[2714]: E0813 00:49:54.353457 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:54.353736 kubelet[2714]: E0813 00:49:54.353706 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:54.353736 kubelet[2714]: W0813 00:49:54.353721 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:54.353814 kubelet[2714]: E0813 00:49:54.353742 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:54.353950 kubelet[2714]: E0813 00:49:54.353935 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:54.353950 kubelet[2714]: W0813 00:49:54.353947 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:54.354002 kubelet[2714]: E0813 00:49:54.353969 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:54.354183 kubelet[2714]: E0813 00:49:54.354169 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:54.354213 kubelet[2714]: W0813 00:49:54.354179 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:54.354213 kubelet[2714]: E0813 00:49:54.354204 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:49:54.354389 kubelet[2714]: E0813 00:49:54.354375 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:54.354389 kubelet[2714]: W0813 00:49:54.354385 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:54.354460 kubelet[2714]: E0813 00:49:54.354445 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:54.354646 kubelet[2714]: E0813 00:49:54.354632 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:54.354646 kubelet[2714]: W0813 00:49:54.354642 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:54.354791 kubelet[2714]: E0813 00:49:54.354750 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:54.354857 kubelet[2714]: E0813 00:49:54.354843 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:54.354857 kubelet[2714]: W0813 00:49:54.354853 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:54.354929 kubelet[2714]: E0813 00:49:54.354915 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:54.355131 kubelet[2714]: E0813 00:49:54.355105 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:54.355131 kubelet[2714]: W0813 00:49:54.355125 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:54.355131 kubelet[2714]: E0813 00:49:54.355162 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:54.355343 kubelet[2714]: E0813 00:49:54.355327 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:54.355343 kubelet[2714]: W0813 00:49:54.355339 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:54.355718 kubelet[2714]: E0813 00:49:54.355365 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:49:54.355718 kubelet[2714]: E0813 00:49:54.355715 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:54.355718 kubelet[2714]: W0813 00:49:54.355723 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:54.355936 kubelet[2714]: E0813 00:49:54.355748 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:54.355997 kubelet[2714]: E0813 00:49:54.355981 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:54.355997 kubelet[2714]: W0813 00:49:54.355992 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:54.356090 kubelet[2714]: E0813 00:49:54.356074 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:54.356253 kubelet[2714]: E0813 00:49:54.356236 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:54.356253 kubelet[2714]: W0813 00:49:54.356248 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:54.356364 kubelet[2714]: E0813 00:49:54.356339 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:54.356594 kubelet[2714]: E0813 00:49:54.356477 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:54.356594 kubelet[2714]: W0813 00:49:54.356487 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:54.356594 kubelet[2714]: E0813 00:49:54.356513 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:54.356736 kubelet[2714]: E0813 00:49:54.356724 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:54.356862 kubelet[2714]: W0813 00:49:54.356781 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:54.356862 kubelet[2714]: E0813 00:49:54.356807 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:49:54.356978 kubelet[2714]: E0813 00:49:54.356967 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:54.357111 kubelet[2714]: W0813 00:49:54.357012 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:54.357111 kubelet[2714]: E0813 00:49:54.357036 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:54.357230 kubelet[2714]: E0813 00:49:54.357219 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:54.357433 kubelet[2714]: W0813 00:49:54.357275 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:54.357433 kubelet[2714]: E0813 00:49:54.357297 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:54.357600 kubelet[2714]: E0813 00:49:54.357584 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:54.357600 kubelet[2714]: W0813 00:49:54.357601 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:54.357668 kubelet[2714]: E0813 00:49:54.357616 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:54.357828 kubelet[2714]: E0813 00:49:54.357813 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:54.357828 kubelet[2714]: W0813 00:49:54.357825 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:54.357885 kubelet[2714]: E0813 00:49:54.357849 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:54.358068 kubelet[2714]: E0813 00:49:54.358052 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:54.358068 kubelet[2714]: W0813 00:49:54.358065 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:54.358112 kubelet[2714]: E0813 00:49:54.358074 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:49:54.358480 kubelet[2714]: E0813 00:49:54.358465 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:54.358480 kubelet[2714]: W0813 00:49:54.358476 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:54.358578 kubelet[2714]: E0813 00:49:54.358491 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:54.358743 kubelet[2714]: E0813 00:49:54.358728 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:54.358743 kubelet[2714]: W0813 00:49:54.358739 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:54.358787 kubelet[2714]: E0813 00:49:54.358761 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:54.358953 kubelet[2714]: E0813 00:49:54.358939 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:54.358953 kubelet[2714]: W0813 00:49:54.358950 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:54.359007 kubelet[2714]: E0813 00:49:54.358957 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:54.359150 kubelet[2714]: E0813 00:49:54.359136 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:54.359150 kubelet[2714]: W0813 00:49:54.359147 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:54.359193 kubelet[2714]: E0813 00:49:54.359155 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:54.367587 kubelet[2714]: E0813 00:49:54.367559 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:54.367587 kubelet[2714]: W0813 00:49:54.367575 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:54.367587 kubelet[2714]: E0813 00:49:54.367585 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:54.471884 update_engine[1533]: I20250813 00:49:54.471762 1533 update_attempter.cc:509] Updating boot flags... 
Aug 13 00:49:54.947801 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1362148305.mount: Deactivated successfully. Aug 13 00:49:55.553572 containerd[1575]: time="2025-08-13T00:49:55.553511659Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:49:55.554371 containerd[1575]: time="2025-08-13T00:49:55.554279749Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Aug 13 00:49:55.554904 containerd[1575]: time="2025-08-13T00:49:55.554879833Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:49:55.556825 containerd[1575]: time="2025-08-13T00:49:55.556804893Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:49:55.557549 containerd[1575]: time="2025-08-13T00:49:55.557207213Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 1.521283198s" Aug 13 00:49:55.557549 containerd[1575]: time="2025-08-13T00:49:55.557237632Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Aug 13 00:49:55.558168 containerd[1575]: time="2025-08-13T00:49:55.558151588Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Aug 13 00:49:55.572834 containerd[1575]: time="2025-08-13T00:49:55.572767668Z" level=info msg="CreateContainer within sandbox \"f4dc011f92297f78e36f94b7dbb7843b8ccbeba56b97dae72518dd4bf406ab01\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Aug 13 00:49:55.577812 containerd[1575]: time="2025-08-13T00:49:55.577784077Z" level=info msg="Container 64f5ed96cd6435d0387a9156d45c9fa77f4173f26fd1edad24a57ea2cc5a8028: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:49:55.599509 containerd[1575]: time="2025-08-13T00:49:55.599463124Z" level=info msg="CreateContainer within sandbox \"f4dc011f92297f78e36f94b7dbb7843b8ccbeba56b97dae72518dd4bf406ab01\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"64f5ed96cd6435d0387a9156d45c9fa77f4173f26fd1edad24a57ea2cc5a8028\"" Aug 13 00:49:55.600410 containerd[1575]: time="2025-08-13T00:49:55.600371660Z" level=info msg="StartContainer for \"64f5ed96cd6435d0387a9156d45c9fa77f4173f26fd1edad24a57ea2cc5a8028\"" Aug 13 00:49:55.601471 containerd[1575]: time="2025-08-13T00:49:55.601428613Z" level=info msg="connecting to shim 64f5ed96cd6435d0387a9156d45c9fa77f4173f26fd1edad24a57ea2cc5a8028" address="unix:///run/containerd/s/0e004117ba161e07c2371b1e8bb2699165ff1ea3269bd8625cad0d19bb5c9a86" protocol=ttrpc version=3 Aug 13 00:49:55.636649 systemd[1]: Started cri-containerd-64f5ed96cd6435d0387a9156d45c9fa77f4173f26fd1edad24a57ea2cc5a8028.scope - libcontainer container 64f5ed96cd6435d0387a9156d45c9fa77f4173f26fd1edad24a57ea2cc5a8028. 
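The typha pull above reports both the compressed bytes read from the registry and the elapsed pull time, so a rough transfer rate can be read straight off the two numbers. A small Go sketch using only values quoted from the log (treat the result as an estimate, since "bytes read" counts compressed layer data rather than the unpacked image size):

  // pullstats.go: back-of-the-envelope transfer rate for the calico/typha pull above.
  package main

  import (
  	"fmt"
  	"time"
  )

  func main() {
  	const bytesRead = 35233364.0                   // "active requests=0, bytes read=35233364"
  	dur, err := time.ParseDuration("1.521283198s") // "... in 1.521283198s"
  	if err != nil {
  		panic(err)
  	}
  	mib := bytesRead / (1 << 20)
  	fmt.Printf("pulled %.1f MiB in %s (~%.1f MiB/s)\n", mib, dur, mib/dur.Seconds())
  }

For this pull that works out to roughly 33.6 MiB in about 1.52 s, i.e. on the order of 22 MiB/s.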
Aug 13 00:49:55.694941 containerd[1575]: time="2025-08-13T00:49:55.694898962Z" level=info msg="StartContainer for \"64f5ed96cd6435d0387a9156d45c9fa77f4173f26fd1edad24a57ea2cc5a8028\" returns successfully" Aug 13 00:49:55.956112 kubelet[2714]: E0813 00:49:55.956035 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mmxc6" podUID="7697ce71-aa40-4c78-acaa-c59079720a2c" Aug 13 00:49:56.025296 kubelet[2714]: E0813 00:49:56.025190 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:49:56.034943 kubelet[2714]: I0813 00:49:56.034881 2714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-644589c98-5v7wp" podStartSLOduration=1.512262014 podStartE2EDuration="3.034872436s" podCreationTimestamp="2025-08-13 00:49:53 +0000 UTC" firstStartedPulling="2025-08-13 00:49:54.035258853 +0000 UTC m=+16.197078972" lastFinishedPulling="2025-08-13 00:49:55.557869275 +0000 UTC m=+17.719689394" observedRunningTime="2025-08-13 00:49:56.03471141 +0000 UTC m=+18.196531529" watchObservedRunningTime="2025-08-13 00:49:56.034872436 +0000 UTC m=+18.196692555" Aug 13 00:49:56.040872 kubelet[2714]: E0813 00:49:56.040818 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:56.040872 kubelet[2714]: W0813 00:49:56.040863 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:56.041206 kubelet[2714]: E0813 00:49:56.040876 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:56.041358 kubelet[2714]: E0813 00:49:56.041316 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:56.041358 kubelet[2714]: W0813 00:49:56.041351 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:56.041358 kubelet[2714]: E0813 00:49:56.041360 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:56.041602 kubelet[2714]: E0813 00:49:56.041579 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:56.041602 kubelet[2714]: W0813 00:49:56.041595 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:56.041771 kubelet[2714]: E0813 00:49:56.041603 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:49:56.041895 kubelet[2714]: E0813 00:49:56.041804 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:56.041895 kubelet[2714]: W0813 00:49:56.041819 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:56.041895 kubelet[2714]: E0813 00:49:56.041826 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:56.042309 kubelet[2714]: E0813 00:49:56.042063 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:56.042309 kubelet[2714]: W0813 00:49:56.042304 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:56.042377 kubelet[2714]: E0813 00:49:56.042312 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:56.042506 kubelet[2714]: E0813 00:49:56.042490 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:56.042506 kubelet[2714]: W0813 00:49:56.042502 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:56.042587 kubelet[2714]: E0813 00:49:56.042510 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:56.042900 kubelet[2714]: E0813 00:49:56.042875 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:56.042900 kubelet[2714]: W0813 00:49:56.042893 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:56.043178 kubelet[2714]: E0813 00:49:56.042911 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:56.043178 kubelet[2714]: E0813 00:49:56.043123 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:56.043178 kubelet[2714]: W0813 00:49:56.043131 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:56.043178 kubelet[2714]: E0813 00:49:56.043139 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:49:56.043801 kubelet[2714]: E0813 00:49:56.043787 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:56.043801 kubelet[2714]: W0813 00:49:56.043800 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:56.043850 kubelet[2714]: E0813 00:49:56.043808 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:56.044859 kubelet[2714]: E0813 00:49:56.044774 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:56.044859 kubelet[2714]: W0813 00:49:56.044787 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:56.044859 kubelet[2714]: E0813 00:49:56.044797 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:56.045254 kubelet[2714]: E0813 00:49:56.045242 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:56.045402 kubelet[2714]: W0813 00:49:56.045321 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:56.045402 kubelet[2714]: E0813 00:49:56.045334 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:56.045668 kubelet[2714]: E0813 00:49:56.045658 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:56.045812 kubelet[2714]: W0813 00:49:56.045740 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:56.045812 kubelet[2714]: E0813 00:49:56.045753 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:56.046114 kubelet[2714]: E0813 00:49:56.046039 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:56.046114 kubelet[2714]: W0813 00:49:56.046048 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:56.046114 kubelet[2714]: E0813 00:49:56.046056 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:49:56.046386 kubelet[2714]: E0813 00:49:56.046376 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:56.046457 kubelet[2714]: W0813 00:49:56.046447 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:56.046556 kubelet[2714]: E0813 00:49:56.046490 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:56.046885 kubelet[2714]: E0813 00:49:56.046814 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:56.046885 kubelet[2714]: W0813 00:49:56.046824 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:56.046885 kubelet[2714]: E0813 00:49:56.046832 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:56.069512 kubelet[2714]: E0813 00:49:56.069494 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:56.069512 kubelet[2714]: W0813 00:49:56.069508 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:56.069644 kubelet[2714]: E0813 00:49:56.069539 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:56.069796 kubelet[2714]: E0813 00:49:56.069773 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:56.069796 kubelet[2714]: W0813 00:49:56.069789 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:56.069870 kubelet[2714]: E0813 00:49:56.069810 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:56.070042 kubelet[2714]: E0813 00:49:56.070022 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:56.070042 kubelet[2714]: W0813 00:49:56.070037 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:56.070091 kubelet[2714]: E0813 00:49:56.070060 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:49:56.070273 kubelet[2714]: E0813 00:49:56.070259 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:56.070273 kubelet[2714]: W0813 00:49:56.070269 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:56.070322 kubelet[2714]: E0813 00:49:56.070291 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:56.070502 kubelet[2714]: E0813 00:49:56.070488 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:56.070502 kubelet[2714]: W0813 00:49:56.070499 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:56.070658 kubelet[2714]: E0813 00:49:56.070604 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:56.070834 kubelet[2714]: E0813 00:49:56.070807 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:56.070834 kubelet[2714]: W0813 00:49:56.070823 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:56.070881 kubelet[2714]: E0813 00:49:56.070837 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:56.071023 kubelet[2714]: E0813 00:49:56.071009 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:56.071023 kubelet[2714]: W0813 00:49:56.071021 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:56.071073 kubelet[2714]: E0813 00:49:56.071043 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:56.071256 kubelet[2714]: E0813 00:49:56.071242 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:56.071256 kubelet[2714]: W0813 00:49:56.071254 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:56.071300 kubelet[2714]: E0813 00:49:56.071277 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:49:56.071456 kubelet[2714]: E0813 00:49:56.071442 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:56.071456 kubelet[2714]: W0813 00:49:56.071453 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:56.071508 kubelet[2714]: E0813 00:49:56.071466 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:56.071648 kubelet[2714]: E0813 00:49:56.071633 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:56.071675 kubelet[2714]: W0813 00:49:56.071644 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:56.071700 kubelet[2714]: E0813 00:49:56.071687 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:56.071888 kubelet[2714]: E0813 00:49:56.071875 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:56.071888 kubelet[2714]: W0813 00:49:56.071885 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:56.071931 kubelet[2714]: E0813 00:49:56.071907 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:56.072271 kubelet[2714]: E0813 00:49:56.072255 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:56.072271 kubelet[2714]: W0813 00:49:56.072267 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:56.072323 kubelet[2714]: E0813 00:49:56.072282 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:56.072459 kubelet[2714]: E0813 00:49:56.072445 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:56.072459 kubelet[2714]: W0813 00:49:56.072455 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:56.072501 kubelet[2714]: E0813 00:49:56.072474 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:49:56.072685 kubelet[2714]: E0813 00:49:56.072671 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:56.072685 kubelet[2714]: W0813 00:49:56.072681 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:56.072729 kubelet[2714]: E0813 00:49:56.072702 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:56.072894 kubelet[2714]: E0813 00:49:56.072880 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:56.072894 kubelet[2714]: W0813 00:49:56.072890 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:56.072940 kubelet[2714]: E0813 00:49:56.072911 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:56.073119 kubelet[2714]: E0813 00:49:56.073105 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:56.073119 kubelet[2714]: W0813 00:49:56.073115 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:56.073161 kubelet[2714]: E0813 00:49:56.073128 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:56.073438 kubelet[2714]: E0813 00:49:56.073424 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:56.073438 kubelet[2714]: W0813 00:49:56.073435 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:56.073553 kubelet[2714]: E0813 00:49:56.073515 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:56.073672 kubelet[2714]: E0813 00:49:56.073658 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:56.073672 kubelet[2714]: W0813 00:49:56.073668 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:56.073716 kubelet[2714]: E0813 00:49:56.073676 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:49:57.026719 kubelet[2714]: I0813 00:49:57.026593 2714 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:49:57.034424 kubelet[2714]: E0813 00:49:57.026938 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:49:57.053164 kubelet[2714]: E0813 00:49:57.053140 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:57.053164 kubelet[2714]: W0813 00:49:57.053160 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:57.053277 kubelet[2714]: E0813 00:49:57.053179 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:57.053455 kubelet[2714]: E0813 00:49:57.053439 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:57.053455 kubelet[2714]: W0813 00:49:57.053451 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:57.053511 kubelet[2714]: E0813 00:49:57.053461 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:57.053675 kubelet[2714]: E0813 00:49:57.053658 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:57.053675 kubelet[2714]: W0813 00:49:57.053671 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:57.053740 kubelet[2714]: E0813 00:49:57.053681 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:57.053855 kubelet[2714]: E0813 00:49:57.053842 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:57.053855 kubelet[2714]: W0813 00:49:57.053852 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:57.053918 kubelet[2714]: E0813 00:49:57.053859 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:49:57.054017 kubelet[2714]: E0813 00:49:57.054002 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:57.054017 kubelet[2714]: W0813 00:49:57.054013 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:57.054073 kubelet[2714]: E0813 00:49:57.054020 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:57.054221 kubelet[2714]: E0813 00:49:57.054194 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:57.054221 kubelet[2714]: W0813 00:49:57.054212 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:57.054344 kubelet[2714]: E0813 00:49:57.054232 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:57.054427 kubelet[2714]: E0813 00:49:57.054413 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:57.054427 kubelet[2714]: W0813 00:49:57.054424 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:57.054489 kubelet[2714]: E0813 00:49:57.054432 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:57.054699 kubelet[2714]: E0813 00:49:57.054685 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:57.054699 kubelet[2714]: W0813 00:49:57.054696 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:57.054765 kubelet[2714]: E0813 00:49:57.054703 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:57.054909 kubelet[2714]: E0813 00:49:57.054891 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:57.054909 kubelet[2714]: W0813 00:49:57.054903 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:57.054978 kubelet[2714]: E0813 00:49:57.054913 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:49:57.055076 kubelet[2714]: E0813 00:49:57.055061 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:57.055076 kubelet[2714]: W0813 00:49:57.055072 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:57.055124 kubelet[2714]: E0813 00:49:57.055080 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:57.055225 kubelet[2714]: E0813 00:49:57.055212 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:57.055225 kubelet[2714]: W0813 00:49:57.055222 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:57.055271 kubelet[2714]: E0813 00:49:57.055229 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:57.055374 kubelet[2714]: E0813 00:49:57.055361 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:57.055374 kubelet[2714]: W0813 00:49:57.055371 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:57.055413 kubelet[2714]: E0813 00:49:57.055378 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:57.055562 kubelet[2714]: E0813 00:49:57.055548 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:57.055562 kubelet[2714]: W0813 00:49:57.055559 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:57.055619 kubelet[2714]: E0813 00:49:57.055566 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:57.055719 kubelet[2714]: E0813 00:49:57.055706 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:57.055719 kubelet[2714]: W0813 00:49:57.055716 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:57.055765 kubelet[2714]: E0813 00:49:57.055723 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:49:57.055869 kubelet[2714]: E0813 00:49:57.055857 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:57.055869 kubelet[2714]: W0813 00:49:57.055866 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:57.055869 kubelet[2714]: E0813 00:49:57.055873 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:57.080346 kubelet[2714]: E0813 00:49:57.080240 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:57.080346 kubelet[2714]: W0813 00:49:57.080254 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:57.080346 kubelet[2714]: E0813 00:49:57.080264 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:57.080582 kubelet[2714]: E0813 00:49:57.080571 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:57.080737 kubelet[2714]: W0813 00:49:57.080640 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:57.080737 kubelet[2714]: E0813 00:49:57.080652 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:57.080986 kubelet[2714]: E0813 00:49:57.080976 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:57.081372 kubelet[2714]: W0813 00:49:57.081165 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:57.081372 kubelet[2714]: E0813 00:49:57.081176 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:57.081563 kubelet[2714]: E0813 00:49:57.081442 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:57.081563 kubelet[2714]: W0813 00:49:57.081449 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:57.081563 kubelet[2714]: E0813 00:49:57.081456 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:49:57.081786 kubelet[2714]: E0813 00:49:57.081757 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:57.081786 kubelet[2714]: W0813 00:49:57.081776 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:57.082021 kubelet[2714]: E0813 00:49:57.081799 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:57.082210 kubelet[2714]: E0813 00:49:57.082199 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:57.082355 kubelet[2714]: W0813 00:49:57.082276 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:57.082355 kubelet[2714]: E0813 00:49:57.082300 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:57.082629 kubelet[2714]: E0813 00:49:57.082619 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:57.082709 kubelet[2714]: W0813 00:49:57.082698 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:57.082798 kubelet[2714]: E0813 00:49:57.082765 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:57.083561 kubelet[2714]: E0813 00:49:57.083540 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:57.083704 kubelet[2714]: W0813 00:49:57.083624 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:57.083704 kubelet[2714]: E0813 00:49:57.083666 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:57.083849 kubelet[2714]: E0813 00:49:57.083838 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:57.083976 kubelet[2714]: W0813 00:49:57.083885 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:57.083976 kubelet[2714]: E0813 00:49:57.083926 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:49:57.084147 kubelet[2714]: E0813 00:49:57.084125 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:57.084147 kubelet[2714]: W0813 00:49:57.084137 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:57.084760 kubelet[2714]: E0813 00:49:57.084194 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:57.084760 kubelet[2714]: E0813 00:49:57.084283 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:57.084760 kubelet[2714]: W0813 00:49:57.084292 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:57.084760 kubelet[2714]: E0813 00:49:57.084301 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:57.084760 kubelet[2714]: E0813 00:49:57.084429 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:57.084760 kubelet[2714]: W0813 00:49:57.084436 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:57.084760 kubelet[2714]: E0813 00:49:57.084445 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:57.084760 kubelet[2714]: E0813 00:49:57.084616 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:57.084760 kubelet[2714]: W0813 00:49:57.084624 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:57.084760 kubelet[2714]: E0813 00:49:57.084632 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:57.084942 kubelet[2714]: E0813 00:49:57.084929 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:57.084942 kubelet[2714]: W0813 00:49:57.084937 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:57.085186 kubelet[2714]: E0813 00:49:57.084944 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:49:57.085853 kubelet[2714]: E0813 00:49:57.085284 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:57.085853 kubelet[2714]: W0813 00:49:57.085297 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:57.085853 kubelet[2714]: E0813 00:49:57.085304 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:57.085853 kubelet[2714]: E0813 00:49:57.085446 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:57.085853 kubelet[2714]: W0813 00:49:57.085453 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:57.085853 kubelet[2714]: E0813 00:49:57.085461 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:57.085853 kubelet[2714]: E0813 00:49:57.085716 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:57.085853 kubelet[2714]: W0813 00:49:57.085727 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:57.085853 kubelet[2714]: E0813 00:49:57.085746 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:49:57.086410 kubelet[2714]: E0813 00:49:57.085994 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:49:57.086410 kubelet[2714]: W0813 00:49:57.086011 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:49:57.086410 kubelet[2714]: E0813 00:49:57.086043 2714 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:49:57.326285 containerd[1575]: time="2025-08-13T00:49:57.325718791Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:49:57.326677 containerd[1575]: time="2025-08-13T00:49:57.326555972Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Aug 13 00:49:57.327657 containerd[1575]: time="2025-08-13T00:49:57.327629788Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:49:57.329561 containerd[1575]: time="2025-08-13T00:49:57.329474015Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:49:57.330447 containerd[1575]: time="2025-08-13T00:49:57.330404774Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.770842703s" Aug 13 00:49:57.330493 containerd[1575]: time="2025-08-13T00:49:57.330446443Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Aug 13 00:49:57.333640 containerd[1575]: time="2025-08-13T00:49:57.333489164Z" level=info msg="CreateContainer within sandbox \"1c8288186968964e8781e7fefb015ce476b20e510bc2c980e05bbd8a3e06754e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 13 00:49:57.342536 containerd[1575]: time="2025-08-13T00:49:57.341868052Z" level=info msg="Container 81db158723e1439f84f78b06c03b5d037002360d598be04ba41d8113a14dbc92: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:49:57.349355 containerd[1575]: time="2025-08-13T00:49:57.349334632Z" level=info msg="CreateContainer within sandbox \"1c8288186968964e8781e7fefb015ce476b20e510bc2c980e05bbd8a3e06754e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"81db158723e1439f84f78b06c03b5d037002360d598be04ba41d8113a14dbc92\"" Aug 13 00:49:57.350366 containerd[1575]: time="2025-08-13T00:49:57.350192322Z" level=info msg="StartContainer for \"81db158723e1439f84f78b06c03b5d037002360d598be04ba41d8113a14dbc92\"" Aug 13 00:49:57.351705 containerd[1575]: time="2025-08-13T00:49:57.351661678Z" level=info msg="connecting to shim 81db158723e1439f84f78b06c03b5d037002360d598be04ba41d8113a14dbc92" address="unix:///run/containerd/s/2f64bef4556efd88bdc0bed0d4eac38e4fdfa25b706abb4b5ce44183cd686752" protocol=ttrpc version=3 Aug 13 00:49:57.383676 systemd[1]: Started cri-containerd-81db158723e1439f84f78b06c03b5d037002360d598be04ba41d8113a14dbc92.scope - libcontainer container 81db158723e1439f84f78b06c03b5d037002360d598be04ba41d8113a14dbc92. 
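The FlexVolume noise above has a single cause: the kubelet's dynamic plugin prober executes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init and tries to unmarshal its stdout as JSON, but that binary is only installed by the Calico flexvol-driver container being pulled and started in the entries that follow, so the empty output fails with "unexpected end of JSON input". Below is a minimal sketch of what such a driver is expected to print on init, assuming the usual FlexVolume call convention; it is not the real Calico uds driver.

```go
// Minimal FlexVolume-style driver sketch (illustrative, not Calico's uds binary):
// the kubelet runs "<driver> init" and parses stdout as a JSON status object.
// An empty reply is exactly what produces "unexpected end of JSON input" above.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func reply(s driverStatus) {
	_ = json.NewEncoder(os.Stdout).Encode(s)
}

func main() {
	if len(os.Args) < 2 {
		reply(driverStatus{Status: "Failure", Message: "no command given"})
		os.Exit(1)
	}
	switch os.Args[1] {
	case "init":
		// "attach": false tells the kubelet not to expect attach/detach calls.
		reply(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
	default:
		reply(driverStatus{Status: "Not supported",
			Message: fmt.Sprintf("command %q not handled in this sketch", os.Args[1])})
	}
}
```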
Aug 13 00:49:57.425286 containerd[1575]: time="2025-08-13T00:49:57.425199498Z" level=info msg="StartContainer for \"81db158723e1439f84f78b06c03b5d037002360d598be04ba41d8113a14dbc92\" returns successfully" Aug 13 00:49:57.440343 systemd[1]: cri-containerd-81db158723e1439f84f78b06c03b5d037002360d598be04ba41d8113a14dbc92.scope: Deactivated successfully. Aug 13 00:49:57.444685 containerd[1575]: time="2025-08-13T00:49:57.443535279Z" level=info msg="received exit event container_id:\"81db158723e1439f84f78b06c03b5d037002360d598be04ba41d8113a14dbc92\" id:\"81db158723e1439f84f78b06c03b5d037002360d598be04ba41d8113a14dbc92\" pid:3443 exited_at:{seconds:1755046197 nanos:440104107}" Aug 13 00:49:57.444685 containerd[1575]: time="2025-08-13T00:49:57.443735194Z" level=info msg="TaskExit event in podsandbox handler container_id:\"81db158723e1439f84f78b06c03b5d037002360d598be04ba41d8113a14dbc92\" id:\"81db158723e1439f84f78b06c03b5d037002360d598be04ba41d8113a14dbc92\" pid:3443 exited_at:{seconds:1755046197 nanos:440104107}" Aug 13 00:49:57.467858 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-81db158723e1439f84f78b06c03b5d037002360d598be04ba41d8113a14dbc92-rootfs.mount: Deactivated successfully. Aug 13 00:49:57.956817 kubelet[2714]: E0813 00:49:57.956704 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mmxc6" podUID="7697ce71-aa40-4c78-acaa-c59079720a2c" Aug 13 00:49:58.031931 containerd[1575]: time="2025-08-13T00:49:58.031691190Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Aug 13 00:49:59.955500 kubelet[2714]: E0813 00:49:59.955402 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mmxc6" podUID="7697ce71-aa40-4c78-acaa-c59079720a2c" Aug 13 00:50:01.086664 containerd[1575]: time="2025-08-13T00:50:01.086632090Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:50:01.088626 containerd[1575]: time="2025-08-13T00:50:01.088596015Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Aug 13 00:50:01.089926 containerd[1575]: time="2025-08-13T00:50:01.089868032Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:50:01.091918 containerd[1575]: time="2025-08-13T00:50:01.091882467Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:50:01.092863 containerd[1575]: time="2025-08-13T00:50:01.092838910Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 3.061074672s" Aug 13 00:50:01.092921 containerd[1575]: 
time="2025-08-13T00:50:01.092863199Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Aug 13 00:50:01.095054 containerd[1575]: time="2025-08-13T00:50:01.095032621Z" level=info msg="CreateContainer within sandbox \"1c8288186968964e8781e7fefb015ce476b20e510bc2c980e05bbd8a3e06754e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 13 00:50:01.108121 containerd[1575]: time="2025-08-13T00:50:01.107102258Z" level=info msg="Container 0b2931c49111f5dedab916809e17593978c3ad8a783c833a248fcb8991e634f6: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:50:01.114077 containerd[1575]: time="2025-08-13T00:50:01.114045576Z" level=info msg="CreateContainer within sandbox \"1c8288186968964e8781e7fefb015ce476b20e510bc2c980e05bbd8a3e06754e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0b2931c49111f5dedab916809e17593978c3ad8a783c833a248fcb8991e634f6\"" Aug 13 00:50:01.115566 containerd[1575]: time="2025-08-13T00:50:01.115538959Z" level=info msg="StartContainer for \"0b2931c49111f5dedab916809e17593978c3ad8a783c833a248fcb8991e634f6\"" Aug 13 00:50:01.117743 containerd[1575]: time="2025-08-13T00:50:01.117682341Z" level=info msg="connecting to shim 0b2931c49111f5dedab916809e17593978c3ad8a783c833a248fcb8991e634f6" address="unix:///run/containerd/s/2f64bef4556efd88bdc0bed0d4eac38e4fdfa25b706abb4b5ce44183cd686752" protocol=ttrpc version=3 Aug 13 00:50:01.139635 systemd[1]: Started cri-containerd-0b2931c49111f5dedab916809e17593978c3ad8a783c833a248fcb8991e634f6.scope - libcontainer container 0b2931c49111f5dedab916809e17593978c3ad8a783c833a248fcb8991e634f6. Aug 13 00:50:01.181623 containerd[1575]: time="2025-08-13T00:50:01.181575443Z" level=info msg="StartContainer for \"0b2931c49111f5dedab916809e17593978c3ad8a783c833a248fcb8991e634f6\" returns successfully" Aug 13 00:50:01.603739 containerd[1575]: time="2025-08-13T00:50:01.603683401Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 00:50:01.606778 systemd[1]: cri-containerd-0b2931c49111f5dedab916809e17593978c3ad8a783c833a248fcb8991e634f6.scope: Deactivated successfully. Aug 13 00:50:01.607249 systemd[1]: cri-containerd-0b2931c49111f5dedab916809e17593978c3ad8a783c833a248fcb8991e634f6.scope: Consumed 484ms CPU time, 197.9M memory peak, 171.2M written to disk. 
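The install-cni container started above writes files under /etc/cni/net.d, and containerd reloads its CNI configuration on every change event there; it keeps logging "no network config found" until a *.conf or *.conflist file appears (the calico-kubeconfig write that triggered the reload is not itself a network config). A rough standard-library sketch of that check, using the paths shown in the log:

```go
// Diagnostic sketch: list what /etc/cni/net.d currently holds and whether each
// *.conf/*.conflist parses as JSON, mirroring containerd's "no network config
// found" complaint above. Paths follow the log; adjust for other hosts.
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	dir := "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read", dir, ":", err)
		return
	}
	found := false
	for _, e := range entries {
		name := e.Name()
		if !strings.HasSuffix(name, ".conf") && !strings.HasSuffix(name, ".conflist") {
			fmt.Println("ignored (not a CNI config):", name) // e.g. calico-kubeconfig
			continue
		}
		found = true
		data, err := os.ReadFile(filepath.Join(dir, name))
		var v map[string]any
		if err == nil {
			err = json.Unmarshal(data, &v)
		}
		fmt.Printf("%s: parse error=%v, network name=%v\n", name, err, v["name"])
	}
	if !found {
		fmt.Println("no network config found in", dir)
	}
}
```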
Aug 13 00:50:01.608226 containerd[1575]: time="2025-08-13T00:50:01.608189082Z" level=info msg="received exit event container_id:\"0b2931c49111f5dedab916809e17593978c3ad8a783c833a248fcb8991e634f6\" id:\"0b2931c49111f5dedab916809e17593978c3ad8a783c833a248fcb8991e634f6\" pid:3502 exited_at:{seconds:1755046201 nanos:608049114}" Aug 13 00:50:01.608735 containerd[1575]: time="2025-08-13T00:50:01.608365149Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0b2931c49111f5dedab916809e17593978c3ad8a783c833a248fcb8991e634f6\" id:\"0b2931c49111f5dedab916809e17593978c3ad8a783c833a248fcb8991e634f6\" pid:3502 exited_at:{seconds:1755046201 nanos:608049114}" Aug 13 00:50:01.609044 kubelet[2714]: I0813 00:50:01.608994 2714 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Aug 13 00:50:01.639424 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0b2931c49111f5dedab916809e17593978c3ad8a783c833a248fcb8991e634f6-rootfs.mount: Deactivated successfully. Aug 13 00:50:01.667298 systemd[1]: Created slice kubepods-burstable-podfd8972e4_10f5_4f13_8b21_de07e7f562ab.slice - libcontainer container kubepods-burstable-podfd8972e4_10f5_4f13_8b21_de07e7f562ab.slice. Aug 13 00:50:01.693366 kubelet[2714]: W0813 00:50:01.686988 2714 reflector.go:561] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:172-234-199-101" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node '172-234-199-101' and this object Aug 13 00:50:01.693366 kubelet[2714]: E0813 00:50:01.691759 2714 reflector.go:158] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:172-234-199-101\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node '172-234-199-101' and this object" logger="UnhandledError" Aug 13 00:50:01.700067 systemd[1]: Created slice kubepods-besteffort-pode3b1e034_c37e_4fd7_a5e2_60afa07bbb9d.slice - libcontainer container kubepods-besteffort-pode3b1e034_c37e_4fd7_a5e2_60afa07bbb9d.slice. 
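The reflector warning below about "no relationship found between node '172-234-199-101' and this object" is the node authorizer at work: a kubelet may read a ConfigMap such as kube-root-ca.crt only once a pod bound to that node references it, and the calico-apiserver pods are only just being set up in these entries. A toy illustration of that relationship rule, not the real authorizer code:

```go
// Toy illustration of the node-authorizer rule behind the reflector warning:
// a node can read a ConfigMap only if some pod bound to that node references it.
package main

import "fmt"

type podRef struct {
	node       string
	configMaps []string // "namespace/name" of referenced ConfigMaps
}

func nodeCanGetConfigMap(pods []podRef, node, ns, name string) bool {
	for _, p := range pods {
		if p.node != node {
			continue
		}
		for _, cm := range p.configMaps {
			if cm == ns+"/"+name {
				return true
			}
		}
	}
	return false
}

func main() {
	// No calico-apiserver pod is bound to the node yet, so the read is denied
	// and the kubelet's reflector retries until the binding exists.
	pods := []podRef{{node: "172-234-199-101", configMaps: []string{"kube-system/coredns"}}}
	fmt.Println(nodeCanGetConfigMap(pods, "172-234-199-101", "calico-apiserver", "kube-root-ca.crt")) // false
}
```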
Aug 13 00:50:01.712006 kubelet[2714]: I0813 00:50:01.711966 2714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2addd270-7ed2-4caf-9455-ca6a63f6fe8b-tigera-ca-bundle\") pod \"calico-kube-controllers-6d647ccb87-wkv5s\" (UID: \"2addd270-7ed2-4caf-9455-ca6a63f6fe8b\") " pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" Aug 13 00:50:01.712006 kubelet[2714]: I0813 00:50:01.712001 2714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/675d234f-c597-4292-9793-13c374a627ea-whisker-ca-bundle\") pod \"whisker-55684999-d9pds\" (UID: \"675d234f-c597-4292-9793-13c374a627ea\") " pod="calico-system/whisker-55684999-d9pds" Aug 13 00:50:01.712120 kubelet[2714]: I0813 00:50:01.712018 2714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f496f6ba-b4df-4b60-ad01-59c39bd658a4-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-k82xk\" (UID: \"f496f6ba-b4df-4b60-ad01-59c39bd658a4\") " pod="calico-system/goldmane-58fd7646b9-k82xk" Aug 13 00:50:01.712120 kubelet[2714]: I0813 00:50:01.712033 2714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/f496f6ba-b4df-4b60-ad01-59c39bd658a4-goldmane-key-pair\") pod \"goldmane-58fd7646b9-k82xk\" (UID: \"f496f6ba-b4df-4b60-ad01-59c39bd658a4\") " pod="calico-system/goldmane-58fd7646b9-k82xk" Aug 13 00:50:01.712120 kubelet[2714]: I0813 00:50:01.712047 2714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/635fd78e-3d10-4a30-9894-3818897e1867-config-volume\") pod \"coredns-7c65d6cfc9-hxx58\" (UID: \"635fd78e-3d10-4a30-9894-3818897e1867\") " pod="kube-system/coredns-7c65d6cfc9-hxx58" Aug 13 00:50:01.712120 kubelet[2714]: I0813 00:50:01.712062 2714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k29mj\" (UniqueName: \"kubernetes.io/projected/635fd78e-3d10-4a30-9894-3818897e1867-kube-api-access-k29mj\") pod \"coredns-7c65d6cfc9-hxx58\" (UID: \"635fd78e-3d10-4a30-9894-3818897e1867\") " pod="kube-system/coredns-7c65d6cfc9-hxx58" Aug 13 00:50:01.712120 kubelet[2714]: I0813 00:50:01.712076 2714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f496f6ba-b4df-4b60-ad01-59c39bd658a4-config\") pod \"goldmane-58fd7646b9-k82xk\" (UID: \"f496f6ba-b4df-4b60-ad01-59c39bd658a4\") " pod="calico-system/goldmane-58fd7646b9-k82xk" Aug 13 00:50:01.712235 kubelet[2714]: I0813 00:50:01.712088 2714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxrbm\" (UniqueName: \"kubernetes.io/projected/f496f6ba-b4df-4b60-ad01-59c39bd658a4-kube-api-access-cxrbm\") pod \"goldmane-58fd7646b9-k82xk\" (UID: \"f496f6ba-b4df-4b60-ad01-59c39bd658a4\") " pod="calico-system/goldmane-58fd7646b9-k82xk" Aug 13 00:50:01.712235 kubelet[2714]: I0813 00:50:01.712102 2714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwpsm\" (UniqueName: 
\"kubernetes.io/projected/e3b1e034-c37e-4fd7-a5e2-60afa07bbb9d-kube-api-access-wwpsm\") pod \"calico-apiserver-68b6778d4-qcfg4\" (UID: \"e3b1e034-c37e-4fd7-a5e2-60afa07bbb9d\") " pod="calico-apiserver/calico-apiserver-68b6778d4-qcfg4" Aug 13 00:50:01.712235 kubelet[2714]: I0813 00:50:01.712115 2714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dh22\" (UniqueName: \"kubernetes.io/projected/fd8972e4-10f5-4f13-8b21-de07e7f562ab-kube-api-access-4dh22\") pod \"coredns-7c65d6cfc9-dnlsw\" (UID: \"fd8972e4-10f5-4f13-8b21-de07e7f562ab\") " pod="kube-system/coredns-7c65d6cfc9-dnlsw" Aug 13 00:50:01.712235 kubelet[2714]: I0813 00:50:01.712128 2714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/675d234f-c597-4292-9793-13c374a627ea-whisker-backend-key-pair\") pod \"whisker-55684999-d9pds\" (UID: \"675d234f-c597-4292-9793-13c374a627ea\") " pod="calico-system/whisker-55684999-d9pds" Aug 13 00:50:01.712235 kubelet[2714]: I0813 00:50:01.712142 2714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rfmx\" (UniqueName: \"kubernetes.io/projected/2addd270-7ed2-4caf-9455-ca6a63f6fe8b-kube-api-access-2rfmx\") pod \"calico-kube-controllers-6d647ccb87-wkv5s\" (UID: \"2addd270-7ed2-4caf-9455-ca6a63f6fe8b\") " pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" Aug 13 00:50:01.712342 kubelet[2714]: I0813 00:50:01.712155 2714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e3b1e034-c37e-4fd7-a5e2-60afa07bbb9d-calico-apiserver-certs\") pod \"calico-apiserver-68b6778d4-qcfg4\" (UID: \"e3b1e034-c37e-4fd7-a5e2-60afa07bbb9d\") " pod="calico-apiserver/calico-apiserver-68b6778d4-qcfg4" Aug 13 00:50:01.712342 kubelet[2714]: I0813 00:50:01.712176 2714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7q2ps\" (UniqueName: \"kubernetes.io/projected/675d234f-c597-4292-9793-13c374a627ea-kube-api-access-7q2ps\") pod \"whisker-55684999-d9pds\" (UID: \"675d234f-c597-4292-9793-13c374a627ea\") " pod="calico-system/whisker-55684999-d9pds" Aug 13 00:50:01.712342 kubelet[2714]: I0813 00:50:01.712189 2714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpgk9\" (UniqueName: \"kubernetes.io/projected/63ecd8bd-9926-44bc-810b-c535231f65ea-kube-api-access-wpgk9\") pod \"calico-apiserver-68b6778d4-dwpjf\" (UID: \"63ecd8bd-9926-44bc-810b-c535231f65ea\") " pod="calico-apiserver/calico-apiserver-68b6778d4-dwpjf" Aug 13 00:50:01.712342 kubelet[2714]: I0813 00:50:01.712204 2714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fd8972e4-10f5-4f13-8b21-de07e7f562ab-config-volume\") pod \"coredns-7c65d6cfc9-dnlsw\" (UID: \"fd8972e4-10f5-4f13-8b21-de07e7f562ab\") " pod="kube-system/coredns-7c65d6cfc9-dnlsw" Aug 13 00:50:01.712342 kubelet[2714]: I0813 00:50:01.712217 2714 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/63ecd8bd-9926-44bc-810b-c535231f65ea-calico-apiserver-certs\") pod \"calico-apiserver-68b6778d4-dwpjf\" (UID: 
\"63ecd8bd-9926-44bc-810b-c535231f65ea\") " pod="calico-apiserver/calico-apiserver-68b6778d4-dwpjf" Aug 13 00:50:01.713814 systemd[1]: Created slice kubepods-burstable-pod635fd78e_3d10_4a30_9894_3818897e1867.slice - libcontainer container kubepods-burstable-pod635fd78e_3d10_4a30_9894_3818897e1867.slice. Aug 13 00:50:01.721159 systemd[1]: Created slice kubepods-besteffort-pod2addd270_7ed2_4caf_9455_ca6a63f6fe8b.slice - libcontainer container kubepods-besteffort-pod2addd270_7ed2_4caf_9455_ca6a63f6fe8b.slice. Aug 13 00:50:01.728804 systemd[1]: Created slice kubepods-besteffort-podf496f6ba_b4df_4b60_ad01_59c39bd658a4.slice - libcontainer container kubepods-besteffort-podf496f6ba_b4df_4b60_ad01_59c39bd658a4.slice. Aug 13 00:50:01.737212 systemd[1]: Created slice kubepods-besteffort-pod675d234f_c597_4292_9793_13c374a627ea.slice - libcontainer container kubepods-besteffort-pod675d234f_c597_4292_9793_13c374a627ea.slice. Aug 13 00:50:01.743427 systemd[1]: Created slice kubepods-besteffort-pod63ecd8bd_9926_44bc_810b_c535231f65ea.slice - libcontainer container kubepods-besteffort-pod63ecd8bd_9926_44bc_810b_c535231f65ea.slice. Aug 13 00:50:01.960263 systemd[1]: Created slice kubepods-besteffort-pod7697ce71_aa40_4c78_acaa_c59079720a2c.slice - libcontainer container kubepods-besteffort-pod7697ce71_aa40_4c78_acaa_c59079720a2c.slice. Aug 13 00:50:01.962883 containerd[1575]: time="2025-08-13T00:50:01.962846970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mmxc6,Uid:7697ce71-aa40-4c78-acaa-c59079720a2c,Namespace:calico-system,Attempt:0,}" Aug 13 00:50:01.992926 kubelet[2714]: E0813 00:50:01.992804 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:50:01.993646 containerd[1575]: time="2025-08-13T00:50:01.993440420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dnlsw,Uid:fd8972e4-10f5-4f13-8b21-de07e7f562ab,Namespace:kube-system,Attempt:0,}" Aug 13 00:50:02.018159 containerd[1575]: time="2025-08-13T00:50:02.018100343Z" level=error msg="Failed to destroy network for sandbox \"f1b12fb03387b739b00bdaf0c4785a2353c9830109133fc060ead16f05f8ff64\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:02.018563 kubelet[2714]: E0813 00:50:02.018442 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:50:02.020368 containerd[1575]: time="2025-08-13T00:50:02.020344086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hxx58,Uid:635fd78e-3d10-4a30-9894-3818897e1867,Namespace:kube-system,Attempt:0,}" Aug 13 00:50:02.022327 containerd[1575]: time="2025-08-13T00:50:02.022291074Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mmxc6,Uid:7697ce71-aa40-4c78-acaa-c59079720a2c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1b12fb03387b739b00bdaf0c4785a2353c9830109133fc060ead16f05f8ff64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:02.022848 
kubelet[2714]: E0813 00:50:02.022556 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1b12fb03387b739b00bdaf0c4785a2353c9830109133fc060ead16f05f8ff64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:02.022848 kubelet[2714]: E0813 00:50:02.022607 2714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1b12fb03387b739b00bdaf0c4785a2353c9830109133fc060ead16f05f8ff64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mmxc6" Aug 13 00:50:02.022848 kubelet[2714]: E0813 00:50:02.022623 2714 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1b12fb03387b739b00bdaf0c4785a2353c9830109133fc060ead16f05f8ff64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mmxc6" Aug 13 00:50:02.022923 kubelet[2714]: E0813 00:50:02.022669 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mmxc6_calico-system(7697ce71-aa40-4c78-acaa-c59079720a2c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mmxc6_calico-system(7697ce71-aa40-4c78-acaa-c59079720a2c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f1b12fb03387b739b00bdaf0c4785a2353c9830109133fc060ead16f05f8ff64\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mmxc6" podUID="7697ce71-aa40-4c78-acaa-c59079720a2c" Aug 13 00:50:02.025624 containerd[1575]: time="2025-08-13T00:50:02.025508501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d647ccb87-wkv5s,Uid:2addd270-7ed2-4caf-9455-ca6a63f6fe8b,Namespace:calico-system,Attempt:0,}" Aug 13 00:50:02.033187 containerd[1575]: time="2025-08-13T00:50:02.033169154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-k82xk,Uid:f496f6ba-b4df-4b60-ad01-59c39bd658a4,Namespace:calico-system,Attempt:0,}" Aug 13 00:50:02.044619 containerd[1575]: time="2025-08-13T00:50:02.044400348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55684999-d9pds,Uid:675d234f-c597-4292-9793-13c374a627ea,Namespace:calico-system,Attempt:0,}" Aug 13 00:50:02.048550 containerd[1575]: time="2025-08-13T00:50:02.047443018Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 00:50:02.128808 containerd[1575]: time="2025-08-13T00:50:02.128592174Z" level=error msg="Failed to destroy network for sandbox \"893a6af6dc99e5caed193a77847ce36d9edc6b22ac961a1edb63ccfb4a56624d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:02.130902 systemd[1]: 
run-netns-cni\x2dd4a0d4e4\x2d867f\x2da898\x2d8c3e\x2dbbdf331d8683.mount: Deactivated successfully. Aug 13 00:50:02.133989 containerd[1575]: time="2025-08-13T00:50:02.133949966Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dnlsw,Uid:fd8972e4-10f5-4f13-8b21-de07e7f562ab,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"893a6af6dc99e5caed193a77847ce36d9edc6b22ac961a1edb63ccfb4a56624d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:02.135850 kubelet[2714]: E0813 00:50:02.134124 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"893a6af6dc99e5caed193a77847ce36d9edc6b22ac961a1edb63ccfb4a56624d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:02.135850 kubelet[2714]: E0813 00:50:02.134163 2714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"893a6af6dc99e5caed193a77847ce36d9edc6b22ac961a1edb63ccfb4a56624d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-dnlsw" Aug 13 00:50:02.135850 kubelet[2714]: E0813 00:50:02.134180 2714 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"893a6af6dc99e5caed193a77847ce36d9edc6b22ac961a1edb63ccfb4a56624d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-dnlsw" Aug 13 00:50:02.135981 kubelet[2714]: E0813 00:50:02.134213 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-dnlsw_kube-system(fd8972e4-10f5-4f13-8b21-de07e7f562ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-dnlsw_kube-system(fd8972e4-10f5-4f13-8b21-de07e7f562ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"893a6af6dc99e5caed193a77847ce36d9edc6b22ac961a1edb63ccfb4a56624d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-dnlsw" podUID="fd8972e4-10f5-4f13-8b21-de07e7f562ab" Aug 13 00:50:02.148749 containerd[1575]: time="2025-08-13T00:50:02.148711672Z" level=error msg="Failed to destroy network for sandbox \"4ead33af1f140ab17bb2419e09ffde7c9f141f3738246d1105fb655515a32083\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:02.151483 systemd[1]: run-netns-cni\x2d0572877f\x2d2631\x2d8483\x2dbb4a\x2dcc273f060f67.mount: Deactivated successfully. 
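Every RunPodSandbox failure in this stretch reduces to the same stat: the Calico CNI plugin reads /var/lib/calico/nodename, and that file exists only once the calico/node container is running, which it never is here (see the image-pull failures further down). A small sketch of that check, assuming the path from the log:

```go
// Tiny sketch of the check the Calico CNI plugin is failing above: it stats
// /var/lib/calico/nodename, which the calico/node container writes after it
// starts. Until calico-node runs, every sandbox add/delete hits ENOENT.
package main

import (
	"fmt"
	"os"
)

func main() {
	const nodenameFile = "/var/lib/calico/nodename"
	data, err := os.ReadFile(nodenameFile)
	if os.IsNotExist(err) {
		fmt.Println(nodenameFile, "is missing - is the calico-node pod running on this host?")
		return
	}
	if err != nil {
		fmt.Println("unexpected error:", err)
		return
	}
	fmt.Println("node name registered with Calico:", string(data))
}
```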
Aug 13 00:50:02.164018 containerd[1575]: time="2025-08-13T00:50:02.163970199Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hxx58,Uid:635fd78e-3d10-4a30-9894-3818897e1867,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ead33af1f140ab17bb2419e09ffde7c9f141f3738246d1105fb655515a32083\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:02.164480 kubelet[2714]: E0813 00:50:02.164306 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ead33af1f140ab17bb2419e09ffde7c9f141f3738246d1105fb655515a32083\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:02.164593 kubelet[2714]: E0813 00:50:02.164443 2714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ead33af1f140ab17bb2419e09ffde7c9f141f3738246d1105fb655515a32083\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-hxx58" Aug 13 00:50:02.164593 kubelet[2714]: E0813 00:50:02.164566 2714 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ead33af1f140ab17bb2419e09ffde7c9f141f3738246d1105fb655515a32083\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-hxx58" Aug 13 00:50:02.164758 kubelet[2714]: E0813 00:50:02.164705 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-hxx58_kube-system(635fd78e-3d10-4a30-9894-3818897e1867)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-hxx58_kube-system(635fd78e-3d10-4a30-9894-3818897e1867)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4ead33af1f140ab17bb2419e09ffde7c9f141f3738246d1105fb655515a32083\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-hxx58" podUID="635fd78e-3d10-4a30-9894-3818897e1867" Aug 13 00:50:02.175211 containerd[1575]: time="2025-08-13T00:50:02.175117454Z" level=error msg="Failed to destroy network for sandbox \"8f7c431c73e13f38cd1237cdf4f9450a0487b6a865470e551292b711f5d5335c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:02.177078 containerd[1575]: time="2025-08-13T00:50:02.177040623Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d647ccb87-wkv5s,Uid:2addd270-7ed2-4caf-9455-ca6a63f6fe8b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"8f7c431c73e13f38cd1237cdf4f9450a0487b6a865470e551292b711f5d5335c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:02.179557 kubelet[2714]: E0813 00:50:02.177619 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f7c431c73e13f38cd1237cdf4f9450a0487b6a865470e551292b711f5d5335c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:02.179557 kubelet[2714]: E0813 00:50:02.177653 2714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f7c431c73e13f38cd1237cdf4f9450a0487b6a865470e551292b711f5d5335c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" Aug 13 00:50:02.179557 kubelet[2714]: E0813 00:50:02.177667 2714 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f7c431c73e13f38cd1237cdf4f9450a0487b6a865470e551292b711f5d5335c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" Aug 13 00:50:02.178037 systemd[1]: run-netns-cni\x2dba5e0bdf\x2d888a\x2d82c6\x2d3f4c\x2dbc000613d9e6.mount: Deactivated successfully. Aug 13 00:50:02.179708 kubelet[2714]: E0813 00:50:02.177692 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6d647ccb87-wkv5s_calico-system(2addd270-7ed2-4caf-9455-ca6a63f6fe8b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6d647ccb87-wkv5s_calico-system(2addd270-7ed2-4caf-9455-ca6a63f6fe8b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8f7c431c73e13f38cd1237cdf4f9450a0487b6a865470e551292b711f5d5335c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" podUID="2addd270-7ed2-4caf-9455-ca6a63f6fe8b" Aug 13 00:50:02.186045 containerd[1575]: time="2025-08-13T00:50:02.185999224Z" level=error msg="Failed to destroy network for sandbox \"284bf6813fa6f1d1a465412e3785202a440e42965eeb75e74b64bf194f475ecf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:02.188272 systemd[1]: run-netns-cni\x2d54a41cd7\x2db9c8\x2d9c2d\x2d974a\x2d52f25309e646.mount: Deactivated successfully. 
Aug 13 00:50:02.189192 containerd[1575]: time="2025-08-13T00:50:02.189065414Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-k82xk,Uid:f496f6ba-b4df-4b60-ad01-59c39bd658a4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"284bf6813fa6f1d1a465412e3785202a440e42965eeb75e74b64bf194f475ecf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:02.189422 kubelet[2714]: E0813 00:50:02.189311 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"284bf6813fa6f1d1a465412e3785202a440e42965eeb75e74b64bf194f475ecf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:02.189502 kubelet[2714]: E0813 00:50:02.189487 2714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"284bf6813fa6f1d1a465412e3785202a440e42965eeb75e74b64bf194f475ecf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-k82xk" Aug 13 00:50:02.189753 kubelet[2714]: E0813 00:50:02.189598 2714 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"284bf6813fa6f1d1a465412e3785202a440e42965eeb75e74b64bf194f475ecf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-k82xk" Aug 13 00:50:02.189972 kubelet[2714]: E0813 00:50:02.189823 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-k82xk_calico-system(f496f6ba-b4df-4b60-ad01-59c39bd658a4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-k82xk_calico-system(f496f6ba-b4df-4b60-ad01-59c39bd658a4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"284bf6813fa6f1d1a465412e3785202a440e42965eeb75e74b64bf194f475ecf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-k82xk" podUID="f496f6ba-b4df-4b60-ad01-59c39bd658a4" Aug 13 00:50:02.199934 containerd[1575]: time="2025-08-13T00:50:02.199889794Z" level=error msg="Failed to destroy network for sandbox \"520af4af6a94bec88d10bea5f04c151476ab9ea1d42227c8963bd7e235efb9cd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:02.200721 containerd[1575]: time="2025-08-13T00:50:02.200687181Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55684999-d9pds,Uid:675d234f-c597-4292-9793-13c374a627ea,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"520af4af6a94bec88d10bea5f04c151476ab9ea1d42227c8963bd7e235efb9cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:02.200931 kubelet[2714]: E0813 00:50:02.200827 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"520af4af6a94bec88d10bea5f04c151476ab9ea1d42227c8963bd7e235efb9cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:02.200931 kubelet[2714]: E0813 00:50:02.200852 2714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"520af4af6a94bec88d10bea5f04c151476ab9ea1d42227c8963bd7e235efb9cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-55684999-d9pds" Aug 13 00:50:02.200931 kubelet[2714]: E0813 00:50:02.200865 2714 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"520af4af6a94bec88d10bea5f04c151476ab9ea1d42227c8963bd7e235efb9cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-55684999-d9pds" Aug 13 00:50:02.201082 kubelet[2714]: E0813 00:50:02.200890 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-55684999-d9pds_calico-system(675d234f-c597-4292-9793-13c374a627ea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-55684999-d9pds_calico-system(675d234f-c597-4292-9793-13c374a627ea)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"520af4af6a94bec88d10bea5f04c151476ab9ea1d42227c8963bd7e235efb9cd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-55684999-d9pds" podUID="675d234f-c597-4292-9793-13c374a627ea" Aug 13 00:50:02.833225 kubelet[2714]: E0813 00:50:02.833179 2714 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Aug 13 00:50:02.833225 kubelet[2714]: E0813 00:50:02.833216 2714 projected.go:194] Error preparing data for projected volume kube-api-access-wwpsm for pod calico-apiserver/calico-apiserver-68b6778d4-qcfg4: failed to sync configmap cache: timed out waiting for the condition Aug 13 00:50:02.833667 kubelet[2714]: E0813 00:50:02.833286 2714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e3b1e034-c37e-4fd7-a5e2-60afa07bbb9d-kube-api-access-wwpsm podName:e3b1e034-c37e-4fd7-a5e2-60afa07bbb9d nodeName:}" failed. No retries permitted until 2025-08-13 00:50:03.333267952 +0000 UTC m=+25.495088071 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-wwpsm" (UniqueName: "kubernetes.io/projected/e3b1e034-c37e-4fd7-a5e2-60afa07bbb9d-kube-api-access-wwpsm") pod "calico-apiserver-68b6778d4-qcfg4" (UID: "e3b1e034-c37e-4fd7-a5e2-60afa07bbb9d") : failed to sync configmap cache: timed out waiting for the condition Aug 13 00:50:02.837401 kubelet[2714]: E0813 00:50:02.837372 2714 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Aug 13 00:50:02.837401 kubelet[2714]: E0813 00:50:02.837397 2714 projected.go:194] Error preparing data for projected volume kube-api-access-wpgk9 for pod calico-apiserver/calico-apiserver-68b6778d4-dwpjf: failed to sync configmap cache: timed out waiting for the condition Aug 13 00:50:02.837465 kubelet[2714]: E0813 00:50:02.837442 2714 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63ecd8bd-9926-44bc-810b-c535231f65ea-kube-api-access-wpgk9 podName:63ecd8bd-9926-44bc-810b-c535231f65ea nodeName:}" failed. No retries permitted until 2025-08-13 00:50:03.337427883 +0000 UTC m=+25.499248002 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-wpgk9" (UniqueName: "kubernetes.io/projected/63ecd8bd-9926-44bc-810b-c535231f65ea-kube-api-access-wpgk9") pod "calico-apiserver-68b6778d4-dwpjf" (UID: "63ecd8bd-9926-44bc-810b-c535231f65ea") : failed to sync configmap cache: timed out waiting for the condition Aug 13 00:50:03.104194 systemd[1]: run-netns-cni\x2db53633a9\x2d411e\x2d5a9e\x2d550b\x2dcc7f4275338f.mount: Deactivated successfully. Aug 13 00:50:03.138802 systemd[1]: Started sshd@7-172.234.199.101:22-103.189.235.176:45922.service - OpenSSH per-connection server daemon (103.189.235.176:45922). 
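The MountVolume failures above are retried with backoff; the log records only the first delay ("durationBeforeRetry 500ms"). Assuming a doubling policy capped at roughly two minutes, which is an assumption for illustration rather than something stated in this log, the retry schedule would look like this:

```go
// Illustrative retry schedule for the failed volume mounts above: first delay
// of 500ms taken from the log, doubling thereafter up to an assumed cap.
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 500 * time.Millisecond // first delay observed in the log
	maxDelay := 2 * time.Minute     // assumed cap, for illustration only
	elapsed := time.Duration(0)
	for i := 1; i <= 8; i++ {
		fmt.Printf("retry %d after %v (total wait so far %v)\n", i, delay, elapsed)
		elapsed += delay
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```

In this trace the retries succeed quickly anyway: the kube-root-ca.crt ConfigMap becomes readable once the calico-apiserver pods are bound to the node, and the sandboxes are retried at 00:50:03.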
Aug 13 00:50:03.509467 containerd[1575]: time="2025-08-13T00:50:03.509248418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68b6778d4-qcfg4,Uid:e3b1e034-c37e-4fd7-a5e2-60afa07bbb9d,Namespace:calico-apiserver,Attempt:0,}" Aug 13 00:50:03.547102 containerd[1575]: time="2025-08-13T00:50:03.547060882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68b6778d4-dwpjf,Uid:63ecd8bd-9926-44bc-810b-c535231f65ea,Namespace:calico-apiserver,Attempt:0,}" Aug 13 00:50:03.591981 containerd[1575]: time="2025-08-13T00:50:03.591857437Z" level=error msg="Failed to destroy network for sandbox \"cbff9739080cd9edf419f45a5d192e0f18376f65f938b1a9ab662d42bd0cb6a2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:03.593811 containerd[1575]: time="2025-08-13T00:50:03.593759337Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68b6778d4-qcfg4,Uid:e3b1e034-c37e-4fd7-a5e2-60afa07bbb9d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbff9739080cd9edf419f45a5d192e0f18376f65f938b1a9ab662d42bd0cb6a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:03.594430 kubelet[2714]: E0813 00:50:03.594396 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbff9739080cd9edf419f45a5d192e0f18376f65f938b1a9ab662d42bd0cb6a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:03.594647 kubelet[2714]: E0813 00:50:03.594627 2714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbff9739080cd9edf419f45a5d192e0f18376f65f938b1a9ab662d42bd0cb6a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68b6778d4-qcfg4" Aug 13 00:50:03.594791 kubelet[2714]: E0813 00:50:03.594697 2714 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbff9739080cd9edf419f45a5d192e0f18376f65f938b1a9ab662d42bd0cb6a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68b6778d4-qcfg4" Aug 13 00:50:03.595338 kubelet[2714]: E0813 00:50:03.595312 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68b6778d4-qcfg4_calico-apiserver(e3b1e034-c37e-4fd7-a5e2-60afa07bbb9d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68b6778d4-qcfg4_calico-apiserver(e3b1e034-c37e-4fd7-a5e2-60afa07bbb9d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cbff9739080cd9edf419f45a5d192e0f18376f65f938b1a9ab662d42bd0cb6a2\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68b6778d4-qcfg4" podUID="e3b1e034-c37e-4fd7-a5e2-60afa07bbb9d" Aug 13 00:50:03.625959 containerd[1575]: time="2025-08-13T00:50:03.625886529Z" level=error msg="Failed to destroy network for sandbox \"0203dfcb0736cf7d46f177a7392aa10308006e2d22ffe9c7010c96c4e9a855bd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:03.627537 containerd[1575]: time="2025-08-13T00:50:03.626987462Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68b6778d4-dwpjf,Uid:63ecd8bd-9926-44bc-810b-c535231f65ea,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0203dfcb0736cf7d46f177a7392aa10308006e2d22ffe9c7010c96c4e9a855bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:03.627639 kubelet[2714]: E0813 00:50:03.627364 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0203dfcb0736cf7d46f177a7392aa10308006e2d22ffe9c7010c96c4e9a855bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:03.627639 kubelet[2714]: E0813 00:50:03.627446 2714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0203dfcb0736cf7d46f177a7392aa10308006e2d22ffe9c7010c96c4e9a855bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68b6778d4-dwpjf" Aug 13 00:50:03.627639 kubelet[2714]: E0813 00:50:03.627465 2714 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0203dfcb0736cf7d46f177a7392aa10308006e2d22ffe9c7010c96c4e9a855bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68b6778d4-dwpjf" Aug 13 00:50:03.627725 kubelet[2714]: E0813 00:50:03.627696 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68b6778d4-dwpjf_calico-apiserver(63ecd8bd-9926-44bc-810b-c535231f65ea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68b6778d4-dwpjf_calico-apiserver(63ecd8bd-9926-44bc-810b-c535231f65ea)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0203dfcb0736cf7d46f177a7392aa10308006e2d22ffe9c7010c96c4e9a855bd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68b6778d4-dwpjf" podUID="63ecd8bd-9926-44bc-810b-c535231f65ea" Aug 13 00:50:04.105903 systemd[1]: 
run-netns-cni\x2d9223efa7\x2debca\x2d1ed7\x2d6b06\x2d95400aab8c83.mount: Deactivated successfully. Aug 13 00:50:04.106282 systemd[1]: run-netns-cni\x2d08a04348\x2d204d\x2d4f32\x2d8b59\x2d0c8792ca1d2f.mount: Deactivated successfully. Aug 13 00:50:04.470602 sshd[3703]: Received disconnect from 103.189.235.176 port 45922:11: Bye Bye [preauth] Aug 13 00:50:04.470602 sshd[3703]: Disconnected from authenticating user root 103.189.235.176 port 45922 [preauth] Aug 13 00:50:04.473444 systemd[1]: sshd@7-172.234.199.101:22-103.189.235.176:45922.service: Deactivated successfully. Aug 13 00:50:04.977365 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount221944335.mount: Deactivated successfully. Aug 13 00:50:04.978032 containerd[1575]: time="2025-08-13T00:50:04.977962196Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Aug 13 00:50:04.978329 containerd[1575]: time="2025-08-13T00:50:04.977973896Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount221944335: write /var/lib/containerd/tmpmounts/containerd-mount221944335/usr/bin/calico-node: no space left on device" Aug 13 00:50:04.978399 kubelet[2714]: E0813 00:50:04.978267 2714 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount221944335: write /var/lib/containerd/tmpmounts/containerd-mount221944335/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.2" Aug 13 00:50:04.978399 kubelet[2714]: E0813 00:50:04.978310 2714 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount221944335: write /var/lib/containerd/tmpmounts/containerd-mount221944335/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.2" Aug 13 00:50:04.979513 kubelet[2714]: E0813 00:50:04.978477 2714 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:calico-node,Image:ghcr.io/flatcar/calico/node:v3.30.2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:WAIT_FOR_DATASTORE,Value:true,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:k8s,operator,bgp,ValueFrom:nil,},EnvVar{Name:CALICO_DISABLE_FILE_LOGGING,Value:false,ValueFrom:nil,},EnvVar{Name:FELIX_DEFAULTENDPOINTTOHOSTACTION,Value:ACCEPT,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHENABLED,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHPORT,Value:9099,ValueFrom:nil,},EnvVar{Name:NODENAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:FELIX_TYPHAK8SNAMESPACE,Value:calico-system,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAK8SSERVICENAME,Value:calico-typha,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACAFILE,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACERTFILE,Value:/node-certs/tls.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAKEYFILE,Value:/node-certs/tls.key,ValueFrom:nil,},EnvVar{Name:NO_DEFAULT_POOLS,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSGOLDMANESERVER,Value:goldmane.calico-system.svc:7443,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSFLUSHINTERVAL,Value:15,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACN,Value:typha-server,ValueFrom:nil,},EnvVar{Name:CALICO_MANAGE_CNI,Value:true,ValueFrom:nil,},EnvVar{Name:CALICO_NETWORKING_BACKEND,Value:bird,ValueFrom:nil,},EnvVar{Name:IP,Value:autodetect,ValueFrom:nil,},EnvVar{Name:IP_AUTODETECTION_METHOD,Value:first-found,ValueFrom:nil,},EnvVar{Name:IP6,Value:none,ValueFrom:nil,},EnvVar{Name:FELIX_IPV6SUPPORT,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-log-dir,ReadOnly:false,MountPath:/var/log/calico/cni,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-net-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:node-certs,ReadOnly:true,MountPath:/node-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:policysync,ReadOnly:false,MountPath:/var/run/nodeagent,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib-calico,ReadOnly:false,MountPath:/var/lib/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-calico,ReadOnly:false,MountPath:/var/run/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.l
ock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kmm4k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/liveness,Port:{0 9099 },Host:localhost,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/calico-node -bird-ready -felix-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/bin/calico-node -shutdown],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-node-x7x94_calico-system(ab709cf9-e61c-420b-90c5-1c0355308621): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount221944335: write /var/lib/containerd/tmpmounts/containerd-mount221944335/usr/bin/calico-node: no space left on device" logger="UnhandledError" Aug 13 00:50:04.980689 kubelet[2714]: E0813 00:50:04.980636 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount221944335: write /var/lib/containerd/tmpmounts/containerd-mount221944335/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-x7x94" podUID="ab709cf9-e61c-420b-90c5-1c0355308621" Aug 13 00:50:05.056391 kubelet[2714]: E0813 00:50:05.056354 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\"\"" pod="calico-system/calico-node-x7x94" podUID="ab709cf9-e61c-420b-90c5-1c0355308621" Aug 13 00:50:08.115398 kubelet[2714]: I0813 00:50:08.115355 2714 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:50:08.115398 kubelet[2714]: I0813 00:50:08.115405 2714 container_gc.go:88] "Attempting to delete unused containers" Aug 13 00:50:08.117602 kubelet[2714]: I0813 00:50:08.117569 2714 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 
00:50:08.128127 kubelet[2714]: I0813 00:50:08.127793 2714 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:50:08.128127 kubelet[2714]: I0813 00:50:08.127894 2714 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-apiserver/calico-apiserver-68b6778d4-dwpjf","calico-apiserver/calico-apiserver-68b6778d4-qcfg4","calico-system/whisker-55684999-d9pds","calico-system/goldmane-58fd7646b9-k82xk","calico-system/calico-kube-controllers-6d647ccb87-wkv5s","kube-system/coredns-7c65d6cfc9-hxx58","kube-system/coredns-7c65d6cfc9-dnlsw","calico-system/calico-node-x7x94","calico-system/csi-node-driver-mmxc6","tigera-operator/tigera-operator-5bf8dfcb4-jlhrh","calico-system/calico-typha-644589c98-5v7wp","kube-system/kube-controller-manager-172-234-199-101","kube-system/kube-proxy-vxgfg","kube-system/kube-apiserver-172-234-199-101","kube-system/kube-scheduler-172-234-199-101"] Aug 13 00:50:08.133302 kubelet[2714]: I0813 00:50:08.133279 2714 eviction_manager.go:616] "Eviction manager: pod is evicted successfully" pod="calico-apiserver/calico-apiserver-68b6778d4-dwpjf" Aug 13 00:50:08.133302 kubelet[2714]: I0813 00:50:08.133303 2714 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["calico-apiserver/calico-apiserver-68b6778d4-dwpjf"] Aug 13 00:50:08.154057 kubelet[2714]: I0813 00:50:08.154004 2714 kubelet.go:2306] "Pod admission denied" podUID="d40d9851-19bc-41bb-a984-06f188cb3b17" pod="calico-apiserver/calico-apiserver-68b6778d4-8sxxl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:08.157137 kubelet[2714]: I0813 00:50:08.157004 2714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/63ecd8bd-9926-44bc-810b-c535231f65ea-calico-apiserver-certs\") pod \"63ecd8bd-9926-44bc-810b-c535231f65ea\" (UID: \"63ecd8bd-9926-44bc-810b-c535231f65ea\") " Aug 13 00:50:08.157137 kubelet[2714]: I0813 00:50:08.157063 2714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpgk9\" (UniqueName: \"kubernetes.io/projected/63ecd8bd-9926-44bc-810b-c535231f65ea-kube-api-access-wpgk9\") pod \"63ecd8bd-9926-44bc-810b-c535231f65ea\" (UID: \"63ecd8bd-9926-44bc-810b-c535231f65ea\") " Aug 13 00:50:08.164675 kubelet[2714]: I0813 00:50:08.164638 2714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63ecd8bd-9926-44bc-810b-c535231f65ea-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "63ecd8bd-9926-44bc-810b-c535231f65ea" (UID: "63ecd8bd-9926-44bc-810b-c535231f65ea"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 00:50:08.169799 kubelet[2714]: I0813 00:50:08.169762 2714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63ecd8bd-9926-44bc-810b-c535231f65ea-kube-api-access-wpgk9" (OuterVolumeSpecName: "kube-api-access-wpgk9") pod "63ecd8bd-9926-44bc-810b-c535231f65ea" (UID: "63ecd8bd-9926-44bc-810b-c535231f65ea"). InnerVolumeSpecName "kube-api-access-wpgk9". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 00:50:08.169883 systemd[1]: var-lib-kubelet-pods-63ecd8bd\x2d9926\x2d44bc\x2d810b\x2dc535231f65ea-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. 
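[Editor's note] The eviction_manager entries above show kubelet ranking every pod on the node before reclaiming ephemeral-storage, with the BestEffort calico-apiserver pods at the front of the list and the static control-plane pods at the back. As an illustration only (the real comparator lives in kubelet's eviction package and is not part of this log), here is a minimal Python sketch of the documented ordering for ephemeral-storage eviction: pods whose usage exceeds their requests go first, then lower-priority pods, then pods with the largest usage over their requests. The pod data below is made up for the example.

from dataclasses import dataclass

@dataclass
class Pod:
    name: str
    priority: int          # pod priority (higher = evicted later)
    usage_bytes: int       # current ephemeral-storage usage
    request_bytes: int     # ephemeral-storage request (0 for BestEffort)

def eviction_rank_key(pod: Pod):
    exceeds = pod.usage_bytes > pod.request_bytes
    over = pod.usage_bytes - pod.request_bytes
    # ascending sort: usage-over-request first, then lower priority, then larger overage
    return (not exceeds, pod.priority, -over)

pods = [  # hypothetical numbers, names borrowed from the log for readability
    Pod("calico-apiserver-68b6778d4-dwpjf", 0, 300 << 20, 0),
    Pod("coredns-7c65d6cfc9-hxx58", 2_000_000_000, 10 << 20, 0),
    Pod("kube-apiserver-172-234-199-101", 2_000_001_000, 50 << 20, 0),
]
for p in sorted(pods, key=eviction_rank_key):
    print(p.name)   # calico-apiserver ranks first, mirroring the ordering logged above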
Aug 13 00:50:08.175711 systemd[1]: var-lib-kubelet-pods-63ecd8bd\x2d9926\x2d44bc\x2d810b\x2dc535231f65ea-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwpgk9.mount: Deactivated successfully. Aug 13 00:50:08.185969 kubelet[2714]: I0813 00:50:08.185862 2714 kubelet.go:2306] "Pod admission denied" podUID="73318f66-ed6f-4b98-8d00-e99c1ff07db4" pod="calico-apiserver/calico-apiserver-68b6778d4-8tp47" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:08.206444 kubelet[2714]: I0813 00:50:08.206408 2714 kubelet.go:2306] "Pod admission denied" podUID="6c6e2aea-728e-424b-acb5-ce17e68c79ce" pod="calico-apiserver/calico-apiserver-68b6778d4-xjfdk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:08.228540 kubelet[2714]: I0813 00:50:08.228472 2714 kubelet.go:2306] "Pod admission denied" podUID="c76deeb5-5df3-41e5-b251-478e96bbd719" pod="calico-apiserver/calico-apiserver-68b6778d4-dqtlx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:08.250210 kubelet[2714]: I0813 00:50:08.250164 2714 kubelet.go:2306] "Pod admission denied" podUID="44869715-841d-4653-a5a7-ba7b9f3d1456" pod="calico-apiserver/calico-apiserver-68b6778d4-rzrd7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:08.257893 kubelet[2714]: I0813 00:50:08.257862 2714 reconciler_common.go:293] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/63ecd8bd-9926-44bc-810b-c535231f65ea-calico-apiserver-certs\") on node \"172-234-199-101\" DevicePath \"\"" Aug 13 00:50:08.257893 kubelet[2714]: I0813 00:50:08.257889 2714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpgk9\" (UniqueName: \"kubernetes.io/projected/63ecd8bd-9926-44bc-810b-c535231f65ea-kube-api-access-wpgk9\") on node \"172-234-199-101\" DevicePath \"\"" Aug 13 00:50:08.280817 kubelet[2714]: I0813 00:50:08.280751 2714 kubelet.go:2306] "Pod admission denied" podUID="0657cee6-f0dc-41fa-8400-58dc2dba59f4" pod="calico-apiserver/calico-apiserver-68b6778d4-8wj9g" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:08.305644 kubelet[2714]: I0813 00:50:08.305596 2714 kubelet.go:2306] "Pod admission denied" podUID="5a4384d6-3952-4eba-9d1a-df6014dd9cd7" pod="calico-apiserver/calico-apiserver-68b6778d4-tb2r2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:08.325675 kubelet[2714]: I0813 00:50:08.325629 2714 kubelet.go:2306] "Pod admission denied" podUID="a18f3f53-b3b8-4c67-ba6b-f0a9b82513bc" pod="calico-apiserver/calico-apiserver-68b6778d4-b65k4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:08.348702 kubelet[2714]: I0813 00:50:08.348666 2714 kubelet.go:2306] "Pod admission denied" podUID="4ed4d23a-daf3-4993-8b30-1259d4bfe7c0" pod="calico-apiserver/calico-apiserver-68b6778d4-s2qpf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:08.502960 kubelet[2714]: I0813 00:50:08.502410 2714 kubelet.go:2306] "Pod admission denied" podUID="f7abbebb-e321-49b0-9deb-bb8a2b629c61" pod="calico-apiserver/calico-apiserver-68b6778d4-z26xz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:08.602545 kubelet[2714]: I0813 00:50:08.602451 2714 kubelet.go:2306] "Pod admission denied" podUID="49d3b213-4e6a-48b9-872e-98a04a288d82" pod="calico-apiserver/calico-apiserver-68b6778d4-fx978" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 00:50:09.068641 systemd[1]: Removed slice kubepods-besteffort-pod63ecd8bd_9926_44bc_810b_c535231f65ea.slice - libcontainer container kubepods-besteffort-pod63ecd8bd_9926_44bc_810b_c535231f65ea.slice. Aug 13 00:50:09.134185 kubelet[2714]: I0813 00:50:09.134152 2714 eviction_manager.go:447] "Eviction manager: pods successfully cleaned up" pods=["calico-apiserver/calico-apiserver-68b6778d4-dwpjf"] Aug 13 00:50:11.146210 systemd[1]: Started sshd@8-172.234.199.101:22-203.193.147.37:50440.service - OpenSSH per-connection server daemon (203.193.147.37:50440). Aug 13 00:50:12.525604 kubelet[2714]: I0813 00:50:12.525402 2714 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:50:12.526159 kubelet[2714]: E0813 00:50:12.525927 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:50:13.068428 kubelet[2714]: E0813 00:50:13.068404 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:50:13.956317 containerd[1575]: time="2025-08-13T00:50:13.956270184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-k82xk,Uid:f496f6ba-b4df-4b60-ad01-59c39bd658a4,Namespace:calico-system,Attempt:0,}" Aug 13 00:50:14.021146 containerd[1575]: time="2025-08-13T00:50:14.021001138Z" level=error msg="Failed to destroy network for sandbox \"659581f8f4ad2a56bcabaf9f16db619646a1edb0957bb24567903c4583bd4afb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:14.023413 systemd[1]: run-netns-cni\x2dea9ec18f\x2dc1b2\x2d5527\x2d6d21\x2de2bd5fd01e78.mount: Deactivated successfully. 
Aug 13 00:50:14.025394 containerd[1575]: time="2025-08-13T00:50:14.025308015Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-k82xk,Uid:f496f6ba-b4df-4b60-ad01-59c39bd658a4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"659581f8f4ad2a56bcabaf9f16db619646a1edb0957bb24567903c4583bd4afb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:14.025671 kubelet[2714]: E0813 00:50:14.025603 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"659581f8f4ad2a56bcabaf9f16db619646a1edb0957bb24567903c4583bd4afb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:14.025918 kubelet[2714]: E0813 00:50:14.025700 2714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"659581f8f4ad2a56bcabaf9f16db619646a1edb0957bb24567903c4583bd4afb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-k82xk" Aug 13 00:50:14.025918 kubelet[2714]: E0813 00:50:14.025748 2714 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"659581f8f4ad2a56bcabaf9f16db619646a1edb0957bb24567903c4583bd4afb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-k82xk" Aug 13 00:50:14.025918 kubelet[2714]: E0813 00:50:14.025788 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-k82xk_calico-system(f496f6ba-b4df-4b60-ad01-59c39bd658a4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-k82xk_calico-system(f496f6ba-b4df-4b60-ad01-59c39bd658a4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"659581f8f4ad2a56bcabaf9f16db619646a1edb0957bb24567903c4583bd4afb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-k82xk" podUID="f496f6ba-b4df-4b60-ad01-59c39bd658a4" Aug 13 00:50:15.957048 kubelet[2714]: E0813 00:50:15.956324 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:50:15.957568 containerd[1575]: time="2025-08-13T00:50:15.956585157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68b6778d4-qcfg4,Uid:e3b1e034-c37e-4fd7-a5e2-60afa07bbb9d,Namespace:calico-apiserver,Attempt:0,}" Aug 13 00:50:15.957857 containerd[1575]: time="2025-08-13T00:50:15.957837288Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-dnlsw,Uid:fd8972e4-10f5-4f13-8b21-de07e7f562ab,Namespace:kube-system,Attempt:0,}" Aug 13 00:50:16.027311 containerd[1575]: time="2025-08-13T00:50:16.027261622Z" level=error msg="Failed to destroy network for sandbox \"ff8ccede46627d3eb2e1edf98b027b94caac4b013444ee22136b6bb95e23ed18\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:16.030553 containerd[1575]: time="2025-08-13T00:50:16.029624436Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68b6778d4-qcfg4,Uid:e3b1e034-c37e-4fd7-a5e2-60afa07bbb9d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff8ccede46627d3eb2e1edf98b027b94caac4b013444ee22136b6bb95e23ed18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:16.030878 systemd[1]: run-netns-cni\x2d1e9058ba\x2d9b74\x2df9b7\x2d2b31\x2d9a929855011d.mount: Deactivated successfully. Aug 13 00:50:16.032956 kubelet[2714]: E0813 00:50:16.032282 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff8ccede46627d3eb2e1edf98b027b94caac4b013444ee22136b6bb95e23ed18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:16.032956 kubelet[2714]: E0813 00:50:16.032933 2714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff8ccede46627d3eb2e1edf98b027b94caac4b013444ee22136b6bb95e23ed18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68b6778d4-qcfg4" Aug 13 00:50:16.032956 kubelet[2714]: E0813 00:50:16.032956 2714 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff8ccede46627d3eb2e1edf98b027b94caac4b013444ee22136b6bb95e23ed18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68b6778d4-qcfg4" Aug 13 00:50:16.033055 kubelet[2714]: E0813 00:50:16.032999 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68b6778d4-qcfg4_calico-apiserver(e3b1e034-c37e-4fd7-a5e2-60afa07bbb9d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68b6778d4-qcfg4_calico-apiserver(e3b1e034-c37e-4fd7-a5e2-60afa07bbb9d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ff8ccede46627d3eb2e1edf98b027b94caac4b013444ee22136b6bb95e23ed18\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68b6778d4-qcfg4" podUID="e3b1e034-c37e-4fd7-a5e2-60afa07bbb9d" Aug 13 00:50:16.040589 
containerd[1575]: time="2025-08-13T00:50:16.040543153Z" level=error msg="Failed to destroy network for sandbox \"5cbe2b0a9842ac867d4220b11ea84a3053b55a5014f4604119390ba5e20386ef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:16.041980 containerd[1575]: time="2025-08-13T00:50:16.041944853Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dnlsw,Uid:fd8972e4-10f5-4f13-8b21-de07e7f562ab,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cbe2b0a9842ac867d4220b11ea84a3053b55a5014f4604119390ba5e20386ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:16.043912 kubelet[2714]: E0813 00:50:16.043574 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cbe2b0a9842ac867d4220b11ea84a3053b55a5014f4604119390ba5e20386ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:16.043912 kubelet[2714]: E0813 00:50:16.043622 2714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cbe2b0a9842ac867d4220b11ea84a3053b55a5014f4604119390ba5e20386ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-dnlsw" Aug 13 00:50:16.043912 kubelet[2714]: E0813 00:50:16.043649 2714 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cbe2b0a9842ac867d4220b11ea84a3053b55a5014f4604119390ba5e20386ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-dnlsw" Aug 13 00:50:16.043912 kubelet[2714]: E0813 00:50:16.043732 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-dnlsw_kube-system(fd8972e4-10f5-4f13-8b21-de07e7f562ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-dnlsw_kube-system(fd8972e4-10f5-4f13-8b21-de07e7f562ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5cbe2b0a9842ac867d4220b11ea84a3053b55a5014f4604119390ba5e20386ef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-dnlsw" podUID="fd8972e4-10f5-4f13-8b21-de07e7f562ab" Aug 13 00:50:16.044129 systemd[1]: run-netns-cni\x2d5d458c4c\x2d241f\x2da735\x2d932c\x2d515f5bcf1219.mount: Deactivated successfully. 
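[Editor's note] Every sandbox failure in this stretch reduces to the same precondition: the Calico CNI plugin stats /var/lib/calico/nodename, a file calico-node writes once it is running, and calico-node never starts because its image cannot be pulled. A minimal sketch of that precondition check, with the path and the hint taken verbatim from the error text above:

import os
import sys

NODENAME_FILE = "/var/lib/calico/nodename"  # path quoted in the CNI errors above

if os.path.isfile(NODENAME_FILE):
    with open(NODENAME_FILE) as f:
        print("calico/node registered this node as:", f.read().strip())
else:
    sys.exit(f"{NODENAME_FILE} missing: check that the calico/node container "
             "is running and has mounted /var/lib/calico/")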
Aug 13 00:50:16.955406 kubelet[2714]: E0813 00:50:16.955011 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:50:16.955851 containerd[1575]: time="2025-08-13T00:50:16.955799556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d647ccb87-wkv5s,Uid:2addd270-7ed2-4caf-9455-ca6a63f6fe8b,Namespace:calico-system,Attempt:0,}" Aug 13 00:50:16.956421 containerd[1575]: time="2025-08-13T00:50:16.956395092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mmxc6,Uid:7697ce71-aa40-4c78-acaa-c59079720a2c,Namespace:calico-system,Attempt:0,}" Aug 13 00:50:16.956638 containerd[1575]: time="2025-08-13T00:50:16.956532471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55684999-d9pds,Uid:675d234f-c597-4292-9793-13c374a627ea,Namespace:calico-system,Attempt:0,}" Aug 13 00:50:16.956638 containerd[1575]: time="2025-08-13T00:50:16.956543751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hxx58,Uid:635fd78e-3d10-4a30-9894-3818897e1867,Namespace:kube-system,Attempt:0,}" Aug 13 00:50:17.055050 containerd[1575]: time="2025-08-13T00:50:17.055009853Z" level=error msg="Failed to destroy network for sandbox \"7e1e923fc56be7ed0da03d2d887a3687bdfcb4a98e6b750b1ed2b4480cd1c31e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:17.059957 systemd[1]: run-netns-cni\x2d7e8e129d\x2d919e\x2d7189\x2dc63f\x2d0acb917e47c4.mount: Deactivated successfully. Aug 13 00:50:17.062100 containerd[1575]: time="2025-08-13T00:50:17.061964459Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d647ccb87-wkv5s,Uid:2addd270-7ed2-4caf-9455-ca6a63f6fe8b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e1e923fc56be7ed0da03d2d887a3687bdfcb4a98e6b750b1ed2b4480cd1c31e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:17.063308 kubelet[2714]: E0813 00:50:17.062930 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e1e923fc56be7ed0da03d2d887a3687bdfcb4a98e6b750b1ed2b4480cd1c31e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:17.063308 kubelet[2714]: E0813 00:50:17.062993 2714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e1e923fc56be7ed0da03d2d887a3687bdfcb4a98e6b750b1ed2b4480cd1c31e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" Aug 13 00:50:17.063308 kubelet[2714]: E0813 00:50:17.063021 2714 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"7e1e923fc56be7ed0da03d2d887a3687bdfcb4a98e6b750b1ed2b4480cd1c31e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" Aug 13 00:50:17.063308 kubelet[2714]: E0813 00:50:17.063071 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6d647ccb87-wkv5s_calico-system(2addd270-7ed2-4caf-9455-ca6a63f6fe8b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6d647ccb87-wkv5s_calico-system(2addd270-7ed2-4caf-9455-ca6a63f6fe8b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7e1e923fc56be7ed0da03d2d887a3687bdfcb4a98e6b750b1ed2b4480cd1c31e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" podUID="2addd270-7ed2-4caf-9455-ca6a63f6fe8b" Aug 13 00:50:17.085902 containerd[1575]: time="2025-08-13T00:50:17.085866049Z" level=error msg="Failed to destroy network for sandbox \"d44620d7249c9e4c9a61f6f08db12948a0146e3d3eba609f53865dccff42d779\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:17.086151 containerd[1575]: time="2025-08-13T00:50:17.086113347Z" level=error msg="Failed to destroy network for sandbox \"6d0ea0a0b88b2cde21e57d4c11fab7ed597deeedd3466bd022b6dd999900746d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:17.088195 containerd[1575]: time="2025-08-13T00:50:17.087323470Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55684999-d9pds,Uid:675d234f-c597-4292-9793-13c374a627ea,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d0ea0a0b88b2cde21e57d4c11fab7ed597deeedd3466bd022b6dd999900746d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:17.088195 containerd[1575]: time="2025-08-13T00:50:17.087994506Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hxx58,Uid:635fd78e-3d10-4a30-9894-3818897e1867,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d44620d7249c9e4c9a61f6f08db12948a0146e3d3eba609f53865dccff42d779\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:17.088311 kubelet[2714]: E0813 00:50:17.087503 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d0ea0a0b88b2cde21e57d4c11fab7ed597deeedd3466bd022b6dd999900746d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 
00:50:17.088311 kubelet[2714]: E0813 00:50:17.087730 2714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d0ea0a0b88b2cde21e57d4c11fab7ed597deeedd3466bd022b6dd999900746d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-55684999-d9pds" Aug 13 00:50:17.088311 kubelet[2714]: E0813 00:50:17.087752 2714 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d0ea0a0b88b2cde21e57d4c11fab7ed597deeedd3466bd022b6dd999900746d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-55684999-d9pds" Aug 13 00:50:17.088311 kubelet[2714]: E0813 00:50:17.087959 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-55684999-d9pds_calico-system(675d234f-c597-4292-9793-13c374a627ea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-55684999-d9pds_calico-system(675d234f-c597-4292-9793-13c374a627ea)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6d0ea0a0b88b2cde21e57d4c11fab7ed597deeedd3466bd022b6dd999900746d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-55684999-d9pds" podUID="675d234f-c597-4292-9793-13c374a627ea" Aug 13 00:50:17.088311 kubelet[2714]: E0813 00:50:17.088093 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d44620d7249c9e4c9a61f6f08db12948a0146e3d3eba609f53865dccff42d779\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:17.088311 kubelet[2714]: E0813 00:50:17.088114 2714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d44620d7249c9e4c9a61f6f08db12948a0146e3d3eba609f53865dccff42d779\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-hxx58" Aug 13 00:50:17.088311 kubelet[2714]: E0813 00:50:17.088126 2714 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d44620d7249c9e4c9a61f6f08db12948a0146e3d3eba609f53865dccff42d779\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-hxx58" Aug 13 00:50:17.088311 kubelet[2714]: E0813 00:50:17.088147 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-hxx58_kube-system(635fd78e-3d10-4a30-9894-3818897e1867)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-7c65d6cfc9-hxx58_kube-system(635fd78e-3d10-4a30-9894-3818897e1867)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d44620d7249c9e4c9a61f6f08db12948a0146e3d3eba609f53865dccff42d779\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-hxx58" podUID="635fd78e-3d10-4a30-9894-3818897e1867" Aug 13 00:50:17.090806 systemd[1]: run-netns-cni\x2dfb8390f7\x2d3f4b\x2d17e7\x2de0d6\x2d15f049b11554.mount: Deactivated successfully. Aug 13 00:50:17.091001 systemd[1]: run-netns-cni\x2d9d96582f\x2d600a\x2da9f9\x2d31bd\x2dabcff26687bc.mount: Deactivated successfully. Aug 13 00:50:17.097352 containerd[1575]: time="2025-08-13T00:50:17.097157428Z" level=error msg="Failed to destroy network for sandbox \"0aa167e6d22f1056f558ac25dfbaacbb698be2407a8967a5a635a297a5f697b0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:17.098107 containerd[1575]: time="2025-08-13T00:50:17.098083362Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mmxc6,Uid:7697ce71-aa40-4c78-acaa-c59079720a2c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0aa167e6d22f1056f558ac25dfbaacbb698be2407a8967a5a635a297a5f697b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:17.098288 kubelet[2714]: E0813 00:50:17.098250 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0aa167e6d22f1056f558ac25dfbaacbb698be2407a8967a5a635a297a5f697b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:17.098326 kubelet[2714]: E0813 00:50:17.098312 2714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0aa167e6d22f1056f558ac25dfbaacbb698be2407a8967a5a635a297a5f697b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mmxc6" Aug 13 00:50:17.098379 kubelet[2714]: E0813 00:50:17.098327 2714 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0aa167e6d22f1056f558ac25dfbaacbb698be2407a8967a5a635a297a5f697b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mmxc6" Aug 13 00:50:17.098400 kubelet[2714]: E0813 00:50:17.098375 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mmxc6_calico-system(7697ce71-aa40-4c78-acaa-c59079720a2c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mmxc6_calico-system(7697ce71-aa40-4c78-acaa-c59079720a2c)\\\": rpc error: code = 
Unknown desc = failed to setup network for sandbox \\\"0aa167e6d22f1056f558ac25dfbaacbb698be2407a8967a5a635a297a5f697b0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mmxc6" podUID="7697ce71-aa40-4c78-acaa-c59079720a2c" Aug 13 00:50:17.967453 systemd[1]: run-netns-cni\x2d9adba46d\x2d10da\x2d0462\x2d4dab\x2de64c1c200cfb.mount: Deactivated successfully. Aug 13 00:50:18.957127 containerd[1575]: time="2025-08-13T00:50:18.957094311Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 00:50:19.166666 kubelet[2714]: I0813 00:50:19.166634 2714 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:50:19.166666 kubelet[2714]: I0813 00:50:19.166662 2714 container_gc.go:88] "Attempting to delete unused containers" Aug 13 00:50:19.168840 kubelet[2714]: I0813 00:50:19.168826 2714 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:50:19.177598 kubelet[2714]: I0813 00:50:19.177571 2714 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:50:19.177665 kubelet[2714]: I0813 00:50:19.177655 2714 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/goldmane-58fd7646b9-k82xk","calico-system/whisker-55684999-d9pds","calico-apiserver/calico-apiserver-68b6778d4-qcfg4","kube-system/coredns-7c65d6cfc9-dnlsw","kube-system/coredns-7c65d6cfc9-hxx58","calico-system/calico-kube-controllers-6d647ccb87-wkv5s","calico-system/csi-node-driver-mmxc6","calico-system/calico-node-x7x94","tigera-operator/tigera-operator-5bf8dfcb4-jlhrh","calico-system/calico-typha-644589c98-5v7wp","kube-system/kube-controller-manager-172-234-199-101","kube-system/kube-proxy-vxgfg","kube-system/kube-apiserver-172-234-199-101","kube-system/kube-scheduler-172-234-199-101"] Aug 13 00:50:19.182508 kubelet[2714]: I0813 00:50:19.182491 2714 eviction_manager.go:616] "Eviction manager: pod is evicted successfully" pod="calico-system/goldmane-58fd7646b9-k82xk" Aug 13 00:50:19.182508 kubelet[2714]: I0813 00:50:19.182507 2714 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["calico-system/goldmane-58fd7646b9-k82xk"] Aug 13 00:50:19.221369 kubelet[2714]: I0813 00:50:19.220917 2714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/f496f6ba-b4df-4b60-ad01-59c39bd658a4-goldmane-key-pair\") pod \"f496f6ba-b4df-4b60-ad01-59c39bd658a4\" (UID: \"f496f6ba-b4df-4b60-ad01-59c39bd658a4\") " Aug 13 00:50:19.221369 kubelet[2714]: I0813 00:50:19.220951 2714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f496f6ba-b4df-4b60-ad01-59c39bd658a4-config\") pod \"f496f6ba-b4df-4b60-ad01-59c39bd658a4\" (UID: \"f496f6ba-b4df-4b60-ad01-59c39bd658a4\") " Aug 13 00:50:19.221369 kubelet[2714]: I0813 00:50:19.220977 2714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f496f6ba-b4df-4b60-ad01-59c39bd658a4-goldmane-ca-bundle\") pod \"f496f6ba-b4df-4b60-ad01-59c39bd658a4\" (UID: \"f496f6ba-b4df-4b60-ad01-59c39bd658a4\") " Aug 13 00:50:19.221369 kubelet[2714]: I0813 00:50:19.220998 2714 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-cxrbm\" (UniqueName: \"kubernetes.io/projected/f496f6ba-b4df-4b60-ad01-59c39bd658a4-kube-api-access-cxrbm\") pod \"f496f6ba-b4df-4b60-ad01-59c39bd658a4\" (UID: \"f496f6ba-b4df-4b60-ad01-59c39bd658a4\") " Aug 13 00:50:19.223000 kubelet[2714]: I0813 00:50:19.222981 2714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f496f6ba-b4df-4b60-ad01-59c39bd658a4-config" (OuterVolumeSpecName: "config") pod "f496f6ba-b4df-4b60-ad01-59c39bd658a4" (UID: "f496f6ba-b4df-4b60-ad01-59c39bd658a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 00:50:19.223333 kubelet[2714]: I0813 00:50:19.223316 2714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f496f6ba-b4df-4b60-ad01-59c39bd658a4-goldmane-ca-bundle" (OuterVolumeSpecName: "goldmane-ca-bundle") pod "f496f6ba-b4df-4b60-ad01-59c39bd658a4" (UID: "f496f6ba-b4df-4b60-ad01-59c39bd658a4"). InnerVolumeSpecName "goldmane-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 00:50:19.227414 kubelet[2714]: I0813 00:50:19.227395 2714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f496f6ba-b4df-4b60-ad01-59c39bd658a4-goldmane-key-pair" (OuterVolumeSpecName: "goldmane-key-pair") pod "f496f6ba-b4df-4b60-ad01-59c39bd658a4" (UID: "f496f6ba-b4df-4b60-ad01-59c39bd658a4"). InnerVolumeSpecName "goldmane-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 00:50:19.227584 kubelet[2714]: I0813 00:50:19.227549 2714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f496f6ba-b4df-4b60-ad01-59c39bd658a4-kube-api-access-cxrbm" (OuterVolumeSpecName: "kube-api-access-cxrbm") pod "f496f6ba-b4df-4b60-ad01-59c39bd658a4" (UID: "f496f6ba-b4df-4b60-ad01-59c39bd658a4"). InnerVolumeSpecName "kube-api-access-cxrbm". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 00:50:19.227712 systemd[1]: var-lib-kubelet-pods-f496f6ba\x2db4df\x2d4b60\x2dad01\x2d59c39bd658a4-volumes-kubernetes.io\x7esecret-goldmane\x2dkey\x2dpair.mount: Deactivated successfully. Aug 13 00:50:19.230795 systemd[1]: var-lib-kubelet-pods-f496f6ba\x2db4df\x2d4b60\x2dad01\x2d59c39bd658a4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcxrbm.mount: Deactivated successfully. 
Aug 13 00:50:19.321794 kubelet[2714]: I0813 00:50:19.321750 2714 reconciler_common.go:293] "Volume detached for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f496f6ba-b4df-4b60-ad01-59c39bd658a4-goldmane-ca-bundle\") on node \"172-234-199-101\" DevicePath \"\"" Aug 13 00:50:19.321794 kubelet[2714]: I0813 00:50:19.321781 2714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxrbm\" (UniqueName: \"kubernetes.io/projected/f496f6ba-b4df-4b60-ad01-59c39bd658a4-kube-api-access-cxrbm\") on node \"172-234-199-101\" DevicePath \"\"" Aug 13 00:50:19.321794 kubelet[2714]: I0813 00:50:19.321791 2714 reconciler_common.go:293] "Volume detached for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/f496f6ba-b4df-4b60-ad01-59c39bd658a4-goldmane-key-pair\") on node \"172-234-199-101\" DevicePath \"\"" Aug 13 00:50:19.321972 kubelet[2714]: I0813 00:50:19.321801 2714 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f496f6ba-b4df-4b60-ad01-59c39bd658a4-config\") on node \"172-234-199-101\" DevicePath \"\"" Aug 13 00:50:19.967744 systemd[1]: Removed slice kubepods-besteffort-podf496f6ba_b4df_4b60_ad01_59c39bd658a4.slice - libcontainer container kubepods-besteffort-podf496f6ba_b4df_4b60_ad01_59c39bd658a4.slice. Aug 13 00:50:20.183382 kubelet[2714]: I0813 00:50:20.183313 2714 eviction_manager.go:447] "Eviction manager: pods successfully cleaned up" pods=["calico-system/goldmane-58fd7646b9-k82xk"] Aug 13 00:50:20.196667 kubelet[2714]: I0813 00:50:20.196484 2714 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:50:20.196667 kubelet[2714]: I0813 00:50:20.196613 2714 container_gc.go:88] "Attempting to delete unused containers" Aug 13 00:50:20.200236 kubelet[2714]: I0813 00:50:20.200212 2714 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:50:20.216428 kubelet[2714]: I0813 00:50:20.216404 2714 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:50:20.216626 kubelet[2714]: I0813 00:50:20.216609 2714 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/whisker-55684999-d9pds","calico-apiserver/calico-apiserver-68b6778d4-qcfg4","kube-system/coredns-7c65d6cfc9-dnlsw","calico-system/calico-kube-controllers-6d647ccb87-wkv5s","kube-system/coredns-7c65d6cfc9-hxx58","calico-system/calico-node-x7x94","calico-system/csi-node-driver-mmxc6","tigera-operator/tigera-operator-5bf8dfcb4-jlhrh","calico-system/calico-typha-644589c98-5v7wp","kube-system/kube-controller-manager-172-234-199-101","kube-system/kube-proxy-vxgfg","kube-system/kube-apiserver-172-234-199-101","kube-system/kube-scheduler-172-234-199-101"] Aug 13 00:50:20.222885 kubelet[2714]: I0813 00:50:20.222784 2714 eviction_manager.go:616] "Eviction manager: pod is evicted successfully" pod="calico-system/whisker-55684999-d9pds" Aug 13 00:50:20.222885 kubelet[2714]: I0813 00:50:20.222801 2714 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["calico-system/whisker-55684999-d9pds"] Aug 13 00:50:20.327415 kubelet[2714]: I0813 00:50:20.327374 2714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/675d234f-c597-4292-9793-13c374a627ea-whisker-backend-key-pair\") pod \"675d234f-c597-4292-9793-13c374a627ea\" (UID: 
\"675d234f-c597-4292-9793-13c374a627ea\") " Aug 13 00:50:20.327415 kubelet[2714]: I0813 00:50:20.327412 2714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/675d234f-c597-4292-9793-13c374a627ea-whisker-ca-bundle\") pod \"675d234f-c597-4292-9793-13c374a627ea\" (UID: \"675d234f-c597-4292-9793-13c374a627ea\") " Aug 13 00:50:20.327634 kubelet[2714]: I0813 00:50:20.327431 2714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7q2ps\" (UniqueName: \"kubernetes.io/projected/675d234f-c597-4292-9793-13c374a627ea-kube-api-access-7q2ps\") pod \"675d234f-c597-4292-9793-13c374a627ea\" (UID: \"675d234f-c597-4292-9793-13c374a627ea\") " Aug 13 00:50:20.328246 kubelet[2714]: I0813 00:50:20.328217 2714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/675d234f-c597-4292-9793-13c374a627ea-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "675d234f-c597-4292-9793-13c374a627ea" (UID: "675d234f-c597-4292-9793-13c374a627ea"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 00:50:20.334810 systemd[1]: var-lib-kubelet-pods-675d234f\x2dc597\x2d4292\x2d9793\x2d13c374a627ea-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Aug 13 00:50:20.335262 kubelet[2714]: I0813 00:50:20.335124 2714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/675d234f-c597-4292-9793-13c374a627ea-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "675d234f-c597-4292-9793-13c374a627ea" (UID: "675d234f-c597-4292-9793-13c374a627ea"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 00:50:20.339011 systemd[1]: var-lib-kubelet-pods-675d234f\x2dc597\x2d4292\x2d9793\x2d13c374a627ea-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7q2ps.mount: Deactivated successfully. Aug 13 00:50:20.339622 kubelet[2714]: I0813 00:50:20.339574 2714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/675d234f-c597-4292-9793-13c374a627ea-kube-api-access-7q2ps" (OuterVolumeSpecName: "kube-api-access-7q2ps") pod "675d234f-c597-4292-9793-13c374a627ea" (UID: "675d234f-c597-4292-9793-13c374a627ea"). InnerVolumeSpecName "kube-api-access-7q2ps". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 00:50:20.428442 kubelet[2714]: I0813 00:50:20.428384 2714 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/675d234f-c597-4292-9793-13c374a627ea-whisker-backend-key-pair\") on node \"172-234-199-101\" DevicePath \"\"" Aug 13 00:50:20.428442 kubelet[2714]: I0813 00:50:20.428414 2714 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/675d234f-c597-4292-9793-13c374a627ea-whisker-ca-bundle\") on node \"172-234-199-101\" DevicePath \"\"" Aug 13 00:50:20.428442 kubelet[2714]: I0813 00:50:20.428424 2714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7q2ps\" (UniqueName: \"kubernetes.io/projected/675d234f-c597-4292-9793-13c374a627ea-kube-api-access-7q2ps\") on node \"172-234-199-101\" DevicePath \"\"" Aug 13 00:50:20.917539 containerd[1575]: time="2025-08-13T00:50:20.916972763Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1608819080: write /var/lib/containerd/tmpmounts/containerd-mount1608819080/usr/bin/calico-node: no space left on device" Aug 13 00:50:20.917539 containerd[1575]: time="2025-08-13T00:50:20.917060372Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Aug 13 00:50:20.917223 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1608819080.mount: Deactivated successfully. 
Aug 13 00:50:20.919337 kubelet[2714]: E0813 00:50:20.918192 2714 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1608819080: write /var/lib/containerd/tmpmounts/containerd-mount1608819080/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.2" Aug 13 00:50:20.919337 kubelet[2714]: E0813 00:50:20.918251 2714 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1608819080: write /var/lib/containerd/tmpmounts/containerd-mount1608819080/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.2" Aug 13 00:50:20.919467 kubelet[2714]: E0813 00:50:20.918415 2714 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:calico-node,Image:ghcr.io/flatcar/calico/node:v3.30.2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:WAIT_FOR_DATASTORE,Value:true,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:k8s,operator,bgp,ValueFrom:nil,},EnvVar{Name:CALICO_DISABLE_FILE_LOGGING,Value:false,ValueFrom:nil,},EnvVar{Name:FELIX_DEFAULTENDPOINTTOHOSTACTION,Value:ACCEPT,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHENABLED,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHPORT,Value:9099,ValueFrom:nil,},EnvVar{Name:NODENAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:FELIX_TYPHAK8SNAMESPACE,Value:calico-system,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAK8SSERVICENAME,Value:calico-typha,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACAFILE,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACERTFILE,Value:/node-certs/tls.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAKEYFILE,Value:/node-certs/tls.key,ValueFrom:nil,},EnvVar{Name:NO_DEFAULT_POOLS,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSGOLDMANESERVER,Value:goldmane.calico-system.svc:7443,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSFLUSHINTERVAL,Value:15,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACN,Value:typha-server,ValueFrom:nil,},EnvVar{Name:CALICO_MANAGE_CNI,Value:true,ValueFrom:nil,},EnvVar{Name:CALICO_NETWORKING_BACKEND,Value:bird,ValueFrom:nil,},EnvVar{Name:IP,Value:autodetect,ValueFrom:nil,},EnvVar{Name:IP_AUTODETECTION_METHOD,Value:first-found,ValueFrom:nil,},EnvVar{Name:IP6,Value:none,ValueFrom:nil,},EnvVar{Name:FELIX_IPV6SUPPORT,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-log-dir,ReadOnly:false,MountPath:/var/log/calico/cni,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveRea
dOnly:nil,},VolumeMount{Name:cni-net-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:node-certs,ReadOnly:true,MountPath:/node-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:policysync,ReadOnly:false,MountPath:/var/run/nodeagent,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib-calico,ReadOnly:false,MountPath:/var/lib/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-calico,ReadOnly:false,MountPath:/var/run/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kmm4k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/liveness,Port:{0 9099 },Host:localhost,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/calico-node -bird-ready -felix-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/bin/calico-node -shutdown],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-node-x7x94_calico-system(ab709cf9-e61c-420b-90c5-1c0355308621): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1608819080: write /var/lib/containerd/tmpmounts/containerd-mount1608819080/usr/bin/calico-node: no space left on device" logger="UnhandledError" Aug 13 00:50:20.919826 kubelet[2714]: E0813 00:50:20.919602 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ErrImagePull: \"failed to pull and unpack 
image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1608819080: write /var/lib/containerd/tmpmounts/containerd-mount1608819080/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-x7x94" podUID="ab709cf9-e61c-420b-90c5-1c0355308621" Aug 13 00:50:21.088390 systemd[1]: Removed slice kubepods-besteffort-pod675d234f_c597_4292_9793_13c374a627ea.slice - libcontainer container kubepods-besteffort-pod675d234f_c597_4292_9793_13c374a627ea.slice. Aug 13 00:50:21.223966 kubelet[2714]: I0813 00:50:21.223867 2714 eviction_manager.go:447] "Eviction manager: pods successfully cleaned up" pods=["calico-system/whisker-55684999-d9pds"] Aug 13 00:50:22.722033 sshd-session[3969]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=203.193.147.37 user=root Aug 13 00:50:25.466638 sshd[3766]: PAM: Permission denied for root from 203.193.147.37 Aug 13 00:50:26.916684 sshd[3766]: Connection closed by authenticating user root 203.193.147.37 port 50440 [preauth] Aug 13 00:50:26.920849 systemd[1]: sshd@8-172.234.199.101:22-203.193.147.37:50440.service: Deactivated successfully. Aug 13 00:50:26.955989 containerd[1575]: time="2025-08-13T00:50:26.955890417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68b6778d4-qcfg4,Uid:e3b1e034-c37e-4fd7-a5e2-60afa07bbb9d,Namespace:calico-apiserver,Attempt:0,}" Aug 13 00:50:27.003961 containerd[1575]: time="2025-08-13T00:50:27.003896489Z" level=error msg="Failed to destroy network for sandbox \"0a35360f08d4658fea6547de1f6dc539f975eca0aa13c3a9f3082a9bdae047ba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:27.005617 containerd[1575]: time="2025-08-13T00:50:27.005325244Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68b6778d4-qcfg4,Uid:e3b1e034-c37e-4fd7-a5e2-60afa07bbb9d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a35360f08d4658fea6547de1f6dc539f975eca0aa13c3a9f3082a9bdae047ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:27.006444 systemd[1]: run-netns-cni\x2d1b334013\x2d0a2d\x2d19bb\x2d9ecc\x2d9d1569413ade.mount: Deactivated successfully. 
Aug 13 00:50:27.006691 kubelet[2714]: E0813 00:50:27.006648 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a35360f08d4658fea6547de1f6dc539f975eca0aa13c3a9f3082a9bdae047ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:27.007022 kubelet[2714]: E0813 00:50:27.006704 2714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a35360f08d4658fea6547de1f6dc539f975eca0aa13c3a9f3082a9bdae047ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68b6778d4-qcfg4" Aug 13 00:50:27.007022 kubelet[2714]: E0813 00:50:27.006722 2714 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a35360f08d4658fea6547de1f6dc539f975eca0aa13c3a9f3082a9bdae047ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68b6778d4-qcfg4" Aug 13 00:50:27.007022 kubelet[2714]: E0813 00:50:27.006784 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68b6778d4-qcfg4_calico-apiserver(e3b1e034-c37e-4fd7-a5e2-60afa07bbb9d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68b6778d4-qcfg4_calico-apiserver(e3b1e034-c37e-4fd7-a5e2-60afa07bbb9d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0a35360f08d4658fea6547de1f6dc539f975eca0aa13c3a9f3082a9bdae047ba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68b6778d4-qcfg4" podUID="e3b1e034-c37e-4fd7-a5e2-60afa07bbb9d" Aug 13 00:50:27.188370 systemd[1]: Started sshd@9-172.234.199.101:22-110.77.148.87:59950.service - OpenSSH per-connection server daemon (110.77.148.87:59950). Aug 13 00:50:27.955609 containerd[1575]: time="2025-08-13T00:50:27.955268942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d647ccb87-wkv5s,Uid:2addd270-7ed2-4caf-9455-ca6a63f6fe8b,Namespace:calico-system,Attempt:0,}" Aug 13 00:50:28.003718 containerd[1575]: time="2025-08-13T00:50:28.003658873Z" level=error msg="Failed to destroy network for sandbox \"0e01d2b9d2587c11209b7fb7fc4b3f55cfd958858d5b29eef5fe1e4043ab1ff3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:28.005966 systemd[1]: run-netns-cni\x2d72dc45e3\x2d44f1\x2de5b3\x2d0f41\x2dbb69f1776cb9.mount: Deactivated successfully. 
Aug 13 00:50:28.006441 containerd[1575]: time="2025-08-13T00:50:28.006389964Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d647ccb87-wkv5s,Uid:2addd270-7ed2-4caf-9455-ca6a63f6fe8b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e01d2b9d2587c11209b7fb7fc4b3f55cfd958858d5b29eef5fe1e4043ab1ff3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:28.006775 kubelet[2714]: E0813 00:50:28.006635 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e01d2b9d2587c11209b7fb7fc4b3f55cfd958858d5b29eef5fe1e4043ab1ff3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:28.006775 kubelet[2714]: E0813 00:50:28.006680 2714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e01d2b9d2587c11209b7fb7fc4b3f55cfd958858d5b29eef5fe1e4043ab1ff3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" Aug 13 00:50:28.006775 kubelet[2714]: E0813 00:50:28.006722 2714 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e01d2b9d2587c11209b7fb7fc4b3f55cfd958858d5b29eef5fe1e4043ab1ff3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" Aug 13 00:50:28.007064 kubelet[2714]: E0813 00:50:28.006761 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6d647ccb87-wkv5s_calico-system(2addd270-7ed2-4caf-9455-ca6a63f6fe8b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6d647ccb87-wkv5s_calico-system(2addd270-7ed2-4caf-9455-ca6a63f6fe8b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0e01d2b9d2587c11209b7fb7fc4b3f55cfd958858d5b29eef5fe1e4043ab1ff3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" podUID="2addd270-7ed2-4caf-9455-ca6a63f6fe8b" Aug 13 00:50:28.955870 kubelet[2714]: E0813 00:50:28.955837 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:50:28.956975 containerd[1575]: time="2025-08-13T00:50:28.956936077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dnlsw,Uid:fd8972e4-10f5-4f13-8b21-de07e7f562ab,Namespace:kube-system,Attempt:0,}" Aug 13 00:50:29.012546 containerd[1575]: time="2025-08-13T00:50:29.012451327Z" level=error msg="Failed to destroy network for sandbox 
\"3b599ba9e947de15051d560de7b8f8b9aa0d1ece0890279bf6954abfe37de0f5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:29.015504 systemd[1]: run-netns-cni\x2dbe62e8f5\x2d42bf\x2d2d53\x2dc157\x2df5e260abafea.mount: Deactivated successfully. Aug 13 00:50:29.016266 containerd[1575]: time="2025-08-13T00:50:29.015463088Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dnlsw,Uid:fd8972e4-10f5-4f13-8b21-de07e7f562ab,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b599ba9e947de15051d560de7b8f8b9aa0d1ece0890279bf6954abfe37de0f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:29.016328 kubelet[2714]: E0813 00:50:29.016116 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b599ba9e947de15051d560de7b8f8b9aa0d1ece0890279bf6954abfe37de0f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:29.016328 kubelet[2714]: E0813 00:50:29.016169 2714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b599ba9e947de15051d560de7b8f8b9aa0d1ece0890279bf6954abfe37de0f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-dnlsw" Aug 13 00:50:29.016328 kubelet[2714]: E0813 00:50:29.016188 2714 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b599ba9e947de15051d560de7b8f8b9aa0d1ece0890279bf6954abfe37de0f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-dnlsw" Aug 13 00:50:29.016328 kubelet[2714]: E0813 00:50:29.016222 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-dnlsw_kube-system(fd8972e4-10f5-4f13-8b21-de07e7f562ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-dnlsw_kube-system(fd8972e4-10f5-4f13-8b21-de07e7f562ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3b599ba9e947de15051d560de7b8f8b9aa0d1ece0890279bf6954abfe37de0f5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-dnlsw" podUID="fd8972e4-10f5-4f13-8b21-de07e7f562ab" Aug 13 00:50:29.956960 kubelet[2714]: E0813 00:50:29.956136 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:50:29.960510 containerd[1575]: time="2025-08-13T00:50:29.956893830Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-hxx58,Uid:635fd78e-3d10-4a30-9894-3818897e1867,Namespace:kube-system,Attempt:0,}" Aug 13 00:50:29.960510 containerd[1575]: time="2025-08-13T00:50:29.956899260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mmxc6,Uid:7697ce71-aa40-4c78-acaa-c59079720a2c,Namespace:calico-system,Attempt:0,}" Aug 13 00:50:30.019639 containerd[1575]: time="2025-08-13T00:50:30.019599012Z" level=error msg="Failed to destroy network for sandbox \"82ff05cf832c5d47e426841057eb3160fff854d384bb751fafe8566e264a0fac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:30.022807 systemd[1]: run-netns-cni\x2ddf99895a\x2d1a1a\x2d0fa8\x2d299f\x2d9fede1a1fb62.mount: Deactivated successfully. Aug 13 00:50:30.023281 containerd[1575]: time="2025-08-13T00:50:30.023236752Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mmxc6,Uid:7697ce71-aa40-4c78-acaa-c59079720a2c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"82ff05cf832c5d47e426841057eb3160fff854d384bb751fafe8566e264a0fac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:30.024748 kubelet[2714]: E0813 00:50:30.023469 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82ff05cf832c5d47e426841057eb3160fff854d384bb751fafe8566e264a0fac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:30.024748 kubelet[2714]: E0813 00:50:30.023563 2714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82ff05cf832c5d47e426841057eb3160fff854d384bb751fafe8566e264a0fac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mmxc6" Aug 13 00:50:30.024748 kubelet[2714]: E0813 00:50:30.023582 2714 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82ff05cf832c5d47e426841057eb3160fff854d384bb751fafe8566e264a0fac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mmxc6" Aug 13 00:50:30.024748 kubelet[2714]: E0813 00:50:30.023644 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mmxc6_calico-system(7697ce71-aa40-4c78-acaa-c59079720a2c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mmxc6_calico-system(7697ce71-aa40-4c78-acaa-c59079720a2c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"82ff05cf832c5d47e426841057eb3160fff854d384bb751fafe8566e264a0fac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mmxc6" podUID="7697ce71-aa40-4c78-acaa-c59079720a2c" Aug 13 00:50:30.025619 containerd[1575]: time="2025-08-13T00:50:30.025325996Z" level=error msg="Failed to destroy network for sandbox \"fd5e548da92c6049b694bd8770306d3cae814ab0e530da2852c6153a5ebab973\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:30.027250 containerd[1575]: time="2025-08-13T00:50:30.027001272Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hxx58,Uid:635fd78e-3d10-4a30-9894-3818897e1867,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd5e548da92c6049b694bd8770306d3cae814ab0e530da2852c6153a5ebab973\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:30.027340 kubelet[2714]: E0813 00:50:30.027200 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd5e548da92c6049b694bd8770306d3cae814ab0e530da2852c6153a5ebab973\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:30.027340 kubelet[2714]: E0813 00:50:30.027248 2714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd5e548da92c6049b694bd8770306d3cae814ab0e530da2852c6153a5ebab973\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-hxx58" Aug 13 00:50:30.027340 kubelet[2714]: E0813 00:50:30.027266 2714 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd5e548da92c6049b694bd8770306d3cae814ab0e530da2852c6153a5ebab973\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-hxx58" Aug 13 00:50:30.027340 kubelet[2714]: E0813 00:50:30.027306 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-hxx58_kube-system(635fd78e-3d10-4a30-9894-3818897e1867)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-hxx58_kube-system(635fd78e-3d10-4a30-9894-3818897e1867)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fd5e548da92c6049b694bd8770306d3cae814ab0e530da2852c6153a5ebab973\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-hxx58" podUID="635fd78e-3d10-4a30-9894-3818897e1867" Aug 13 00:50:30.027862 systemd[1]: run-netns-cni\x2d92bcdd3b\x2df0fc\x2d447e\x2df4eb\x2dd5d6326e6345.mount: Deactivated successfully. 
Aug 13 00:50:31.249452 kubelet[2714]: I0813 00:50:31.249396 2714 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:50:31.249452 kubelet[2714]: I0813 00:50:31.249424 2714 container_gc.go:88] "Attempting to delete unused containers" Aug 13 00:50:31.250826 kubelet[2714]: I0813 00:50:31.250797 2714 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:50:31.261340 kubelet[2714]: I0813 00:50:31.261314 2714 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:50:31.261623 kubelet[2714]: I0813 00:50:31.261401 2714 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-apiserver/calico-apiserver-68b6778d4-qcfg4","kube-system/coredns-7c65d6cfc9-dnlsw","calico-system/calico-kube-controllers-6d647ccb87-wkv5s","kube-system/coredns-7c65d6cfc9-hxx58","calico-system/csi-node-driver-mmxc6","calico-system/calico-node-x7x94","tigera-operator/tigera-operator-5bf8dfcb4-jlhrh","calico-system/calico-typha-644589c98-5v7wp","kube-system/kube-controller-manager-172-234-199-101","kube-system/kube-proxy-vxgfg","kube-system/kube-apiserver-172-234-199-101","kube-system/kube-scheduler-172-234-199-101"] Aug 13 00:50:31.266958 kubelet[2714]: I0813 00:50:31.266943 2714 eviction_manager.go:616] "Eviction manager: pod is evicted successfully" pod="calico-apiserver/calico-apiserver-68b6778d4-qcfg4" Aug 13 00:50:31.267081 kubelet[2714]: I0813 00:50:31.267042 2714 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["calico-apiserver/calico-apiserver-68b6778d4-qcfg4"] Aug 13 00:50:31.291789 kubelet[2714]: I0813 00:50:31.291186 2714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e3b1e034-c37e-4fd7-a5e2-60afa07bbb9d-calico-apiserver-certs\") pod \"e3b1e034-c37e-4fd7-a5e2-60afa07bbb9d\" (UID: \"e3b1e034-c37e-4fd7-a5e2-60afa07bbb9d\") " Aug 13 00:50:31.291789 kubelet[2714]: I0813 00:50:31.291221 2714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwpsm\" (UniqueName: \"kubernetes.io/projected/e3b1e034-c37e-4fd7-a5e2-60afa07bbb9d-kube-api-access-wwpsm\") pod \"e3b1e034-c37e-4fd7-a5e2-60afa07bbb9d\" (UID: \"e3b1e034-c37e-4fd7-a5e2-60afa07bbb9d\") " Aug 13 00:50:31.300788 kubelet[2714]: I0813 00:50:31.300752 2714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3b1e034-c37e-4fd7-a5e2-60afa07bbb9d-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "e3b1e034-c37e-4fd7-a5e2-60afa07bbb9d" (UID: "e3b1e034-c37e-4fd7-a5e2-60afa07bbb9d"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 00:50:31.300992 systemd[1]: var-lib-kubelet-pods-e3b1e034\x2dc37e\x2d4fd7\x2da5e2\x2d60afa07bbb9d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwwpsm.mount: Deactivated successfully. Aug 13 00:50:31.301101 systemd[1]: var-lib-kubelet-pods-e3b1e034\x2dc37e\x2d4fd7\x2da5e2\x2d60afa07bbb9d-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. 
Aug 13 00:50:31.302539 kubelet[2714]: I0813 00:50:31.301379 2714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3b1e034-c37e-4fd7-a5e2-60afa07bbb9d-kube-api-access-wwpsm" (OuterVolumeSpecName: "kube-api-access-wwpsm") pod "e3b1e034-c37e-4fd7-a5e2-60afa07bbb9d" (UID: "e3b1e034-c37e-4fd7-a5e2-60afa07bbb9d"). InnerVolumeSpecName "kube-api-access-wwpsm". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 00:50:31.391690 kubelet[2714]: I0813 00:50:31.391647 2714 reconciler_common.go:293] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e3b1e034-c37e-4fd7-a5e2-60afa07bbb9d-calico-apiserver-certs\") on node \"172-234-199-101\" DevicePath \"\"" Aug 13 00:50:31.391690 kubelet[2714]: I0813 00:50:31.391673 2714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwpsm\" (UniqueName: \"kubernetes.io/projected/e3b1e034-c37e-4fd7-a5e2-60afa07bbb9d-kube-api-access-wwpsm\") on node \"172-234-199-101\" DevicePath \"\"" Aug 13 00:50:31.887432 sshd-session[4105]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=110.77.148.87 user=root Aug 13 00:50:31.956125 kubelet[2714]: E0813 00:50:31.956058 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\"\"" pod="calico-system/calico-node-x7x94" podUID="ab709cf9-e61c-420b-90c5-1c0355308621" Aug 13 00:50:31.964180 systemd[1]: Removed slice kubepods-besteffort-pode3b1e034_c37e_4fd7_a5e2_60afa07bbb9d.slice - libcontainer container kubepods-besteffort-pode3b1e034_c37e_4fd7_a5e2_60afa07bbb9d.slice. Aug 13 00:50:32.267675 kubelet[2714]: I0813 00:50:32.267561 2714 eviction_manager.go:447] "Eviction manager: pods successfully cleaned up" pods=["calico-apiserver/calico-apiserver-68b6778d4-qcfg4"] Aug 13 00:50:33.338339 sshd[3997]: PAM: Permission denied for root from 110.77.148.87 Aug 13 00:50:34.000799 sshd[3997]: Connection closed by authenticating user root 110.77.148.87 port 59950 [preauth] Aug 13 00:50:34.003320 systemd[1]: sshd@9-172.234.199.101:22-110.77.148.87:59950.service: Deactivated successfully. 
Aug 13 00:50:40.955594 kubelet[2714]: E0813 00:50:40.955385 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:50:40.955999 containerd[1575]: time="2025-08-13T00:50:40.955705521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dnlsw,Uid:fd8972e4-10f5-4f13-8b21-de07e7f562ab,Namespace:kube-system,Attempt:0,}" Aug 13 00:50:40.956453 containerd[1575]: time="2025-08-13T00:50:40.956340550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mmxc6,Uid:7697ce71-aa40-4c78-acaa-c59079720a2c,Namespace:calico-system,Attempt:0,}" Aug 13 00:50:41.016113 containerd[1575]: time="2025-08-13T00:50:41.016062136Z" level=error msg="Failed to destroy network for sandbox \"f394130d282604528478360c554427d3c76f094150e248df3f7e828fecd69111\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:41.017695 containerd[1575]: time="2025-08-13T00:50:41.017624664Z" level=error msg="Failed to destroy network for sandbox \"165813092eff0232e181481cab4f1c314be9f328dcb2880994cecac00aff0dbb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:41.019132 systemd[1]: run-netns-cni\x2dca35367d\x2dbf49\x2d1094\x2d9d96\x2d6a05132da174.mount: Deactivated successfully. Aug 13 00:50:41.019877 containerd[1575]: time="2025-08-13T00:50:41.019405331Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dnlsw,Uid:fd8972e4-10f5-4f13-8b21-de07e7f562ab,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"165813092eff0232e181481cab4f1c314be9f328dcb2880994cecac00aff0dbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:41.020860 kubelet[2714]: E0813 00:50:41.019849 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"165813092eff0232e181481cab4f1c314be9f328dcb2880994cecac00aff0dbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:41.020860 kubelet[2714]: E0813 00:50:41.019900 2714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"165813092eff0232e181481cab4f1c314be9f328dcb2880994cecac00aff0dbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-dnlsw" Aug 13 00:50:41.020860 kubelet[2714]: E0813 00:50:41.019919 2714 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"165813092eff0232e181481cab4f1c314be9f328dcb2880994cecac00aff0dbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-dnlsw" Aug 13 00:50:41.020860 kubelet[2714]: E0813 00:50:41.019954 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-dnlsw_kube-system(fd8972e4-10f5-4f13-8b21-de07e7f562ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-dnlsw_kube-system(fd8972e4-10f5-4f13-8b21-de07e7f562ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"165813092eff0232e181481cab4f1c314be9f328dcb2880994cecac00aff0dbb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-dnlsw" podUID="fd8972e4-10f5-4f13-8b21-de07e7f562ab" Aug 13 00:50:41.020993 containerd[1575]: time="2025-08-13T00:50:41.020949809Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mmxc6,Uid:7697ce71-aa40-4c78-acaa-c59079720a2c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f394130d282604528478360c554427d3c76f094150e248df3f7e828fecd69111\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:41.021801 kubelet[2714]: E0813 00:50:41.021611 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f394130d282604528478360c554427d3c76f094150e248df3f7e828fecd69111\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:41.021801 kubelet[2714]: E0813 00:50:41.021668 2714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f394130d282604528478360c554427d3c76f094150e248df3f7e828fecd69111\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mmxc6" Aug 13 00:50:41.021801 kubelet[2714]: E0813 00:50:41.021682 2714 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f394130d282604528478360c554427d3c76f094150e248df3f7e828fecd69111\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mmxc6" Aug 13 00:50:41.021801 kubelet[2714]: E0813 00:50:41.021714 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mmxc6_calico-system(7697ce71-aa40-4c78-acaa-c59079720a2c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mmxc6_calico-system(7697ce71-aa40-4c78-acaa-c59079720a2c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f394130d282604528478360c554427d3c76f094150e248df3f7e828fecd69111\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mmxc6" podUID="7697ce71-aa40-4c78-acaa-c59079720a2c" Aug 13 00:50:41.024187 systemd[1]: run-netns-cni\x2d6a394c91\x2d1e0d\x2dbb90\x2d4f2c\x2d0f0bc2a99afb.mount: Deactivated successfully. Aug 13 00:50:41.956251 containerd[1575]: time="2025-08-13T00:50:41.956177660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d647ccb87-wkv5s,Uid:2addd270-7ed2-4caf-9455-ca6a63f6fe8b,Namespace:calico-system,Attempt:0,}" Aug 13 00:50:41.998859 containerd[1575]: time="2025-08-13T00:50:41.998813683Z" level=error msg="Failed to destroy network for sandbox \"3b6041f6fc304eaa510d644700ed8703a9bfbd7d021e3c363ad816f60ccbfd5c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:42.000680 containerd[1575]: time="2025-08-13T00:50:42.000377951Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d647ccb87-wkv5s,Uid:2addd270-7ed2-4caf-9455-ca6a63f6fe8b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b6041f6fc304eaa510d644700ed8703a9bfbd7d021e3c363ad816f60ccbfd5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:42.001358 kubelet[2714]: E0813 00:50:42.001320 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b6041f6fc304eaa510d644700ed8703a9bfbd7d021e3c363ad816f60ccbfd5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:42.001785 kubelet[2714]: E0813 00:50:42.001374 2714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b6041f6fc304eaa510d644700ed8703a9bfbd7d021e3c363ad816f60ccbfd5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" Aug 13 00:50:42.001785 kubelet[2714]: E0813 00:50:42.001391 2714 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b6041f6fc304eaa510d644700ed8703a9bfbd7d021e3c363ad816f60ccbfd5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" Aug 13 00:50:42.001785 kubelet[2714]: E0813 00:50:42.001433 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6d647ccb87-wkv5s_calico-system(2addd270-7ed2-4caf-9455-ca6a63f6fe8b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6d647ccb87-wkv5s_calico-system(2addd270-7ed2-4caf-9455-ca6a63f6fe8b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3b6041f6fc304eaa510d644700ed8703a9bfbd7d021e3c363ad816f60ccbfd5c\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" podUID="2addd270-7ed2-4caf-9455-ca6a63f6fe8b" Aug 13 00:50:42.002369 systemd[1]: run-netns-cni\x2d0677b09b\x2d9cb8\x2d0a41\x2d8575\x2db36443af5700.mount: Deactivated successfully. Aug 13 00:50:42.289891 kubelet[2714]: I0813 00:50:42.289809 2714 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:50:42.289891 kubelet[2714]: I0813 00:50:42.289840 2714 container_gc.go:88] "Attempting to delete unused containers" Aug 13 00:50:42.291641 kubelet[2714]: I0813 00:50:42.291576 2714 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:50:42.303581 kubelet[2714]: I0813 00:50:42.303556 2714 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:50:42.303696 kubelet[2714]: I0813 00:50:42.303645 2714 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-6d647ccb87-wkv5s","kube-system/coredns-7c65d6cfc9-dnlsw","kube-system/coredns-7c65d6cfc9-hxx58","calico-system/calico-node-x7x94","calico-system/csi-node-driver-mmxc6","tigera-operator/tigera-operator-5bf8dfcb4-jlhrh","calico-system/calico-typha-644589c98-5v7wp","kube-system/kube-controller-manager-172-234-199-101","kube-system/kube-proxy-vxgfg","kube-system/kube-apiserver-172-234-199-101","kube-system/kube-scheduler-172-234-199-101"] Aug 13 00:50:42.303696 kubelet[2714]: E0813 00:50:42.303667 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" Aug 13 00:50:42.303696 kubelet[2714]: E0813 00:50:42.303676 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-dnlsw" Aug 13 00:50:42.303696 kubelet[2714]: E0813 00:50:42.303683 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-hxx58" Aug 13 00:50:42.303696 kubelet[2714]: E0813 00:50:42.303689 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-x7x94" Aug 13 00:50:42.303696 kubelet[2714]: E0813 00:50:42.303695 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-mmxc6" Aug 13 00:50:42.304240 containerd[1575]: time="2025-08-13T00:50:42.304209671Z" level=info msg="StopContainer for \"c8d79fcf0ea792692eac0766f30fc3de884b10920f99e0b37ac0d8a73eb45c4e\" with timeout 2 (s)" Aug 13 00:50:42.304753 containerd[1575]: time="2025-08-13T00:50:42.304731580Z" level=info msg="Stop container \"c8d79fcf0ea792692eac0766f30fc3de884b10920f99e0b37ac0d8a73eb45c4e\" with signal terminated" Aug 13 00:50:42.320334 systemd[1]: cri-containerd-c8d79fcf0ea792692eac0766f30fc3de884b10920f99e0b37ac0d8a73eb45c4e.scope: Deactivated successfully. Aug 13 00:50:42.320646 systemd[1]: cri-containerd-c8d79fcf0ea792692eac0766f30fc3de884b10920f99e0b37ac0d8a73eb45c4e.scope: Consumed 3.701s CPU time, 84.8M memory peak. 
Aug 13 00:50:42.324346 containerd[1575]: time="2025-08-13T00:50:42.324300136Z" level=info msg="received exit event container_id:\"c8d79fcf0ea792692eac0766f30fc3de884b10920f99e0b37ac0d8a73eb45c4e\" id:\"c8d79fcf0ea792692eac0766f30fc3de884b10920f99e0b37ac0d8a73eb45c4e\" pid:3038 exited_at:{seconds:1755046242 nanos:324012116}" Aug 13 00:50:42.324500 containerd[1575]: time="2025-08-13T00:50:42.324442966Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c8d79fcf0ea792692eac0766f30fc3de884b10920f99e0b37ac0d8a73eb45c4e\" id:\"c8d79fcf0ea792692eac0766f30fc3de884b10920f99e0b37ac0d8a73eb45c4e\" pid:3038 exited_at:{seconds:1755046242 nanos:324012116}" Aug 13 00:50:42.344760 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c8d79fcf0ea792692eac0766f30fc3de884b10920f99e0b37ac0d8a73eb45c4e-rootfs.mount: Deactivated successfully. Aug 13 00:50:42.351700 containerd[1575]: time="2025-08-13T00:50:42.351649302Z" level=info msg="StopContainer for \"c8d79fcf0ea792692eac0766f30fc3de884b10920f99e0b37ac0d8a73eb45c4e\" returns successfully" Aug 13 00:50:42.352492 containerd[1575]: time="2025-08-13T00:50:42.352429201Z" level=info msg="StopPodSandbox for \"66d7027e0c86c0dbacf4cf20a21bfeb1ddf6804c63321f8aa58ae81bc308e8f3\"" Aug 13 00:50:42.352581 containerd[1575]: time="2025-08-13T00:50:42.352501021Z" level=info msg="Container to stop \"c8d79fcf0ea792692eac0766f30fc3de884b10920f99e0b37ac0d8a73eb45c4e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:50:42.359730 systemd[1]: cri-containerd-66d7027e0c86c0dbacf4cf20a21bfeb1ddf6804c63321f8aa58ae81bc308e8f3.scope: Deactivated successfully. Aug 13 00:50:42.361471 containerd[1575]: time="2025-08-13T00:50:42.361434079Z" level=info msg="TaskExit event in podsandbox handler container_id:\"66d7027e0c86c0dbacf4cf20a21bfeb1ddf6804c63321f8aa58ae81bc308e8f3\" id:\"66d7027e0c86c0dbacf4cf20a21bfeb1ddf6804c63321f8aa58ae81bc308e8f3\" pid:2868 exit_status:137 exited_at:{seconds:1755046242 nanos:361222300}" Aug 13 00:50:42.389729 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-66d7027e0c86c0dbacf4cf20a21bfeb1ddf6804c63321f8aa58ae81bc308e8f3-rootfs.mount: Deactivated successfully. 
Aug 13 00:50:42.391402 containerd[1575]: time="2025-08-13T00:50:42.391197432Z" level=info msg="shim disconnected" id=66d7027e0c86c0dbacf4cf20a21bfeb1ddf6804c63321f8aa58ae81bc308e8f3 namespace=k8s.io Aug 13 00:50:42.392333 containerd[1575]: time="2025-08-13T00:50:42.391610982Z" level=warning msg="cleaning up after shim disconnected" id=66d7027e0c86c0dbacf4cf20a21bfeb1ddf6804c63321f8aa58ae81bc308e8f3 namespace=k8s.io Aug 13 00:50:42.392333 containerd[1575]: time="2025-08-13T00:50:42.391657332Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:50:42.404472 containerd[1575]: time="2025-08-13T00:50:42.404428846Z" level=info msg="received exit event sandbox_id:\"66d7027e0c86c0dbacf4cf20a21bfeb1ddf6804c63321f8aa58ae81bc308e8f3\" exit_status:137 exited_at:{seconds:1755046242 nanos:361222300}" Aug 13 00:50:42.404841 containerd[1575]: time="2025-08-13T00:50:42.404811005Z" level=info msg="TearDown network for sandbox \"66d7027e0c86c0dbacf4cf20a21bfeb1ddf6804c63321f8aa58ae81bc308e8f3\" successfully" Aug 13 00:50:42.404841 containerd[1575]: time="2025-08-13T00:50:42.404835855Z" level=info msg="StopPodSandbox for \"66d7027e0c86c0dbacf4cf20a21bfeb1ddf6804c63321f8aa58ae81bc308e8f3\" returns successfully" Aug 13 00:50:42.408025 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-66d7027e0c86c0dbacf4cf20a21bfeb1ddf6804c63321f8aa58ae81bc308e8f3-shm.mount: Deactivated successfully. Aug 13 00:50:42.413445 kubelet[2714]: I0813 00:50:42.413422 2714 eviction_manager.go:616] "Eviction manager: pod is evicted successfully" pod="tigera-operator/tigera-operator-5bf8dfcb4-jlhrh" Aug 13 00:50:42.413613 kubelet[2714]: I0813 00:50:42.413590 2714 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["tigera-operator/tigera-operator-5bf8dfcb4-jlhrh"] Aug 13 00:50:42.441765 kubelet[2714]: I0813 00:50:42.441658 2714 kubelet.go:2306] "Pod admission denied" podUID="005dae7f-b19a-41a8-8dd6-90bd139e2cee" pod="tigera-operator/tigera-operator-5bf8dfcb4-bzztx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:42.456030 kubelet[2714]: I0813 00:50:42.455998 2714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1dcdd14a-c7af-4c1f-8a8f-37db562cb94a-var-lib-calico\") pod \"1dcdd14a-c7af-4c1f-8a8f-37db562cb94a\" (UID: \"1dcdd14a-c7af-4c1f-8a8f-37db562cb94a\") " Aug 13 00:50:42.456137 kubelet[2714]: I0813 00:50:42.456043 2714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dh57g\" (UniqueName: \"kubernetes.io/projected/1dcdd14a-c7af-4c1f-8a8f-37db562cb94a-kube-api-access-dh57g\") pod \"1dcdd14a-c7af-4c1f-8a8f-37db562cb94a\" (UID: \"1dcdd14a-c7af-4c1f-8a8f-37db562cb94a\") " Aug 13 00:50:42.456686 kubelet[2714]: I0813 00:50:42.456651 2714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1dcdd14a-c7af-4c1f-8a8f-37db562cb94a-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "1dcdd14a-c7af-4c1f-8a8f-37db562cb94a" (UID: "1dcdd14a-c7af-4c1f-8a8f-37db562cb94a"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:50:42.463187 kubelet[2714]: I0813 00:50:42.463125 2714 kubelet.go:2306] "Pod admission denied" podUID="28f09653-bddf-43fa-9475-705e2cab1b88" pod="tigera-operator/tigera-operator-5bf8dfcb4-zhgcf" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 00:50:42.463687 systemd[1]: var-lib-kubelet-pods-1dcdd14a\x2dc7af\x2d4c1f\x2d8a8f\x2d37db562cb94a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddh57g.mount: Deactivated successfully. Aug 13 00:50:42.466891 kubelet[2714]: I0813 00:50:42.466819 2714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dcdd14a-c7af-4c1f-8a8f-37db562cb94a-kube-api-access-dh57g" (OuterVolumeSpecName: "kube-api-access-dh57g") pod "1dcdd14a-c7af-4c1f-8a8f-37db562cb94a" (UID: "1dcdd14a-c7af-4c1f-8a8f-37db562cb94a"). InnerVolumeSpecName "kube-api-access-dh57g". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 00:50:42.484085 kubelet[2714]: I0813 00:50:42.483856 2714 kubelet.go:2306] "Pod admission denied" podUID="7b469210-b12a-414d-910d-fb48de296c31" pod="tigera-operator/tigera-operator-5bf8dfcb4-sk4gx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:42.506810 kubelet[2714]: I0813 00:50:42.506780 2714 kubelet.go:2306] "Pod admission denied" podUID="9358f112-2b60-4734-b0c0-1f5e123d826b" pod="tigera-operator/tigera-operator-5bf8dfcb4-jjltj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:42.523298 kubelet[2714]: I0813 00:50:42.523030 2714 kubelet.go:2306] "Pod admission denied" podUID="c61c3496-9a0c-47bd-8ead-3bb0df7d7092" pod="tigera-operator/tigera-operator-5bf8dfcb4-ftr49" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:42.542671 kubelet[2714]: I0813 00:50:42.541907 2714 kubelet.go:2306] "Pod admission denied" podUID="29609d9a-f06e-4f4a-aa61-17f1a9cc4762" pod="tigera-operator/tigera-operator-5bf8dfcb4-gwpbz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:42.556879 kubelet[2714]: I0813 00:50:42.556847 2714 reconciler_common.go:293] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1dcdd14a-c7af-4c1f-8a8f-37db562cb94a-var-lib-calico\") on node \"172-234-199-101\" DevicePath \"\"" Aug 13 00:50:42.556879 kubelet[2714]: I0813 00:50:42.556872 2714 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dh57g\" (UniqueName: \"kubernetes.io/projected/1dcdd14a-c7af-4c1f-8a8f-37db562cb94a-kube-api-access-dh57g\") on node \"172-234-199-101\" DevicePath \"\"" Aug 13 00:50:42.565289 kubelet[2714]: I0813 00:50:42.565132 2714 kubelet.go:2306] "Pod admission denied" podUID="93b68bad-87aa-476a-bfcb-4db776a93163" pod="tigera-operator/tigera-operator-5bf8dfcb4-qhwkw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:42.580356 kubelet[2714]: I0813 00:50:42.580319 2714 kubelet.go:2306] "Pod admission denied" podUID="e9c434ca-e864-4fb8-a177-fd8986e5b0be" pod="tigera-operator/tigera-operator-5bf8dfcb4-rzlxz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:42.606854 kubelet[2714]: I0813 00:50:42.606823 2714 kubelet.go:2306] "Pod admission denied" podUID="9aa17eb0-5ea8-44dd-a0bb-0a0a56c46914" pod="tigera-operator/tigera-operator-5bf8dfcb4-m4f6n" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:42.738005 kubelet[2714]: I0813 00:50:42.737972 2714 kubelet.go:2306] "Pod admission denied" podUID="069df082-f8ed-4026-802b-9ac25801c4d2" pod="tigera-operator/tigera-operator-5bf8dfcb4-b7p24" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 00:50:42.956732 containerd[1575]: time="2025-08-13T00:50:42.956647984Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 00:50:42.991602 kubelet[2714]: I0813 00:50:42.991556 2714 kubelet.go:2306] "Pod admission denied" podUID="a33e8c39-de01-41bb-ba04-c283035ee212" pod="tigera-operator/tigera-operator-5bf8dfcb4-5xsj5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:43.118221 kubelet[2714]: I0813 00:50:43.118195 2714 scope.go:117] "RemoveContainer" containerID="c8d79fcf0ea792692eac0766f30fc3de884b10920f99e0b37ac0d8a73eb45c4e" Aug 13 00:50:43.120834 containerd[1575]: time="2025-08-13T00:50:43.120743659Z" level=info msg="RemoveContainer for \"c8d79fcf0ea792692eac0766f30fc3de884b10920f99e0b37ac0d8a73eb45c4e\"" Aug 13 00:50:43.125416 containerd[1575]: time="2025-08-13T00:50:43.125379803Z" level=info msg="RemoveContainer for \"c8d79fcf0ea792692eac0766f30fc3de884b10920f99e0b37ac0d8a73eb45c4e\" returns successfully" Aug 13 00:50:43.126022 systemd[1]: Removed slice kubepods-besteffort-pod1dcdd14a_c7af_4c1f_8a8f_37db562cb94a.slice - libcontainer container kubepods-besteffort-pod1dcdd14a_c7af_4c1f_8a8f_37db562cb94a.slice. Aug 13 00:50:43.126117 systemd[1]: kubepods-besteffort-pod1dcdd14a_c7af_4c1f_8a8f_37db562cb94a.slice: Consumed 3.731s CPU time, 85M memory peak. Aug 13 00:50:43.126866 kubelet[2714]: I0813 00:50:43.126850 2714 scope.go:117] "RemoveContainer" containerID="c8d79fcf0ea792692eac0766f30fc3de884b10920f99e0b37ac0d8a73eb45c4e" Aug 13 00:50:43.127285 containerd[1575]: time="2025-08-13T00:50:43.127251191Z" level=error msg="ContainerStatus for \"c8d79fcf0ea792692eac0766f30fc3de884b10920f99e0b37ac0d8a73eb45c4e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c8d79fcf0ea792692eac0766f30fc3de884b10920f99e0b37ac0d8a73eb45c4e\": not found" Aug 13 00:50:43.127675 kubelet[2714]: E0813 00:50:43.127645 2714 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c8d79fcf0ea792692eac0766f30fc3de884b10920f99e0b37ac0d8a73eb45c4e\": not found" containerID="c8d79fcf0ea792692eac0766f30fc3de884b10920f99e0b37ac0d8a73eb45c4e" Aug 13 00:50:43.127780 kubelet[2714]: I0813 00:50:43.127672 2714 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c8d79fcf0ea792692eac0766f30fc3de884b10920f99e0b37ac0d8a73eb45c4e"} err="failed to get container status \"c8d79fcf0ea792692eac0766f30fc3de884b10920f99e0b37ac0d8a73eb45c4e\": rpc error: code = NotFound desc = an error occurred when try to find container \"c8d79fcf0ea792692eac0766f30fc3de884b10920f99e0b37ac0d8a73eb45c4e\": not found" Aug 13 00:50:43.139651 kubelet[2714]: I0813 00:50:43.139494 2714 kubelet.go:2306] "Pod admission denied" podUID="b9bc5230-e93a-44f6-b9dd-1445b0e86fa6" pod="tigera-operator/tigera-operator-5bf8dfcb4-6nckl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:43.288879 kubelet[2714]: I0813 00:50:43.287817 2714 kubelet.go:2306] "Pod admission denied" podUID="0f748348-b238-416b-b841-a5178f14f2bb" pod="tigera-operator/tigera-operator-5bf8dfcb4-t9pgq" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 00:50:43.414543 kubelet[2714]: I0813 00:50:43.414474 2714 eviction_manager.go:447] "Eviction manager: pods successfully cleaned up" pods=["tigera-operator/tigera-operator-5bf8dfcb4-jlhrh"] Aug 13 00:50:43.541145 kubelet[2714]: I0813 00:50:43.540205 2714 kubelet.go:2306] "Pod admission denied" podUID="f2f0e966-3118-4252-9bc2-79ac77bb8f59" pod="tigera-operator/tigera-operator-5bf8dfcb4-dhtf6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:43.688942 kubelet[2714]: I0813 00:50:43.687082 2714 kubelet.go:2306] "Pod admission denied" podUID="55001220-c61b-4cca-ba18-6c2bae180664" pod="tigera-operator/tigera-operator-5bf8dfcb4-rvjvj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:43.843951 kubelet[2714]: I0813 00:50:43.842998 2714 kubelet.go:2306] "Pod admission denied" podUID="b9eda6a8-0cc6-4664-9cc6-81825c473c99" pod="tigera-operator/tigera-operator-5bf8dfcb4-n728n" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:43.991105 kubelet[2714]: I0813 00:50:43.991046 2714 kubelet.go:2306] "Pod admission denied" podUID="1195c1e8-b1b2-4b7e-a01d-cfb262cb11c6" pod="tigera-operator/tigera-operator-5bf8dfcb4-vx4w5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:44.141554 kubelet[2714]: I0813 00:50:44.141205 2714 kubelet.go:2306] "Pod admission denied" podUID="53bfbbb6-f9af-4b0c-9d52-602d62da9572" pod="tigera-operator/tigera-operator-5bf8dfcb4-xcvl6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:44.290827 kubelet[2714]: I0813 00:50:44.290708 2714 kubelet.go:2306] "Pod admission denied" podUID="e3d57c4e-f115-4662-9a74-ba107fa8ff22" pod="tigera-operator/tigera-operator-5bf8dfcb4-5frnr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:44.449171 kubelet[2714]: I0813 00:50:44.448707 2714 kubelet.go:2306] "Pod admission denied" podUID="72a2ccac-e65f-4292-97bd-3f6bc5d820e6" pod="tigera-operator/tigera-operator-5bf8dfcb4-qpf4j" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:44.595260 kubelet[2714]: I0813 00:50:44.595081 2714 kubelet.go:2306] "Pod admission denied" podUID="7c81a3ff-ef78-4790-aea6-4a787fa536c1" pod="tigera-operator/tigera-operator-5bf8dfcb4-d5rnm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:44.741771 kubelet[2714]: I0813 00:50:44.741181 2714 kubelet.go:2306] "Pod admission denied" podUID="db14ef33-0350-44d3-8f3a-581e693727d2" pod="tigera-operator/tigera-operator-5bf8dfcb4-bc4m6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:44.889996 kubelet[2714]: I0813 00:50:44.889372 2714 kubelet.go:2306] "Pod admission denied" podUID="606e4244-b201-456f-84f9-27bdde791dfa" pod="tigera-operator/tigera-operator-5bf8dfcb4-grz4b" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 00:50:44.955789 kubelet[2714]: E0813 00:50:44.955746 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:50:44.956269 containerd[1575]: time="2025-08-13T00:50:44.956244303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hxx58,Uid:635fd78e-3d10-4a30-9894-3818897e1867,Namespace:kube-system,Attempt:0,}" Aug 13 00:50:45.022357 containerd[1575]: time="2025-08-13T00:50:45.022273032Z" level=error msg="Failed to destroy network for sandbox \"1cd49d197020a0281ab992e65efb581dd882488beabed8bd70947c4cc7b5276a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:45.025831 systemd[1]: run-netns-cni\x2d7d0c4a80\x2dd9d4\x2def34\x2daead\x2dcfddb58a9c40.mount: Deactivated successfully. Aug 13 00:50:45.026202 containerd[1575]: time="2025-08-13T00:50:45.026149988Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hxx58,Uid:635fd78e-3d10-4a30-9894-3818897e1867,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1cd49d197020a0281ab992e65efb581dd882488beabed8bd70947c4cc7b5276a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:45.027807 kubelet[2714]: E0813 00:50:45.027631 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1cd49d197020a0281ab992e65efb581dd882488beabed8bd70947c4cc7b5276a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:45.027807 kubelet[2714]: E0813 00:50:45.027701 2714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1cd49d197020a0281ab992e65efb581dd882488beabed8bd70947c4cc7b5276a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-hxx58" Aug 13 00:50:45.027941 kubelet[2714]: E0813 00:50:45.027891 2714 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1cd49d197020a0281ab992e65efb581dd882488beabed8bd70947c4cc7b5276a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-hxx58" Aug 13 00:50:45.028060 kubelet[2714]: E0813 00:50:45.027941 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-hxx58_kube-system(635fd78e-3d10-4a30-9894-3818897e1867)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-hxx58_kube-system(635fd78e-3d10-4a30-9894-3818897e1867)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1cd49d197020a0281ab992e65efb581dd882488beabed8bd70947c4cc7b5276a\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-hxx58" podUID="635fd78e-3d10-4a30-9894-3818897e1867" Aug 13 00:50:45.043784 kubelet[2714]: I0813 00:50:45.043719 2714 kubelet.go:2306] "Pod admission denied" podUID="d0892fe7-6300-40d5-a0cc-751820b2c1f1" pod="tigera-operator/tigera-operator-5bf8dfcb4-hkqtz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:45.136472 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2497376889.mount: Deactivated successfully. Aug 13 00:50:45.138800 containerd[1575]: time="2025-08-13T00:50:45.138248633Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2497376889: write /var/lib/containerd/tmpmounts/containerd-mount2497376889/usr/bin/calico-node: no space left on device" Aug 13 00:50:45.138800 containerd[1575]: time="2025-08-13T00:50:45.138317973Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Aug 13 00:50:45.138933 kubelet[2714]: E0813 00:50:45.138480 2714 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2497376889: write /var/lib/containerd/tmpmounts/containerd-mount2497376889/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.2" Aug 13 00:50:45.138933 kubelet[2714]: E0813 00:50:45.138612 2714 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2497376889: write /var/lib/containerd/tmpmounts/containerd-mount2497376889/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.2" Aug 13 00:50:45.140538 kubelet[2714]: E0813 00:50:45.139052 2714 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:calico-node,Image:ghcr.io/flatcar/calico/node:v3.30.2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:WAIT_FOR_DATASTORE,Value:true,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:k8s,operator,bgp,ValueFrom:nil,},EnvVar{Name:CALICO_DISABLE_FILE_LOGGING,Value:false,ValueFrom:nil,},EnvVar{Name:FELIX_DEFAULTENDPOINTTOHOSTACTION,Value:ACCEPT,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHENABLED,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHPORT,Value:9099,ValueFrom:nil,},EnvVar{Name:NODENAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:FELIX_TYPHAK8SNAMESPACE,Value:calico-system,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAK8SSERVICENAME,Value:calico-typha,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACAFILE,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACERTFILE,Value:/node-certs/tls.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAKEYFILE,Value:/node-certs/tls.key,ValueFrom:nil,},EnvVar{Name:NO_DEFAULT_POOLS,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSGOLDMANESERVER,Value:goldmane.calico-system.svc:7443,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSFLUSHINTERVAL,Value:15,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACN,Value:typha-server,ValueFrom:nil,},EnvVar{Name:CALICO_MANAGE_CNI,Value:true,ValueFrom:nil,},EnvVar{Name:CALICO_NETWORKING_BACKEND,Value:bird,ValueFrom:nil,},EnvVar{Name:IP,Value:autodetect,ValueFrom:nil,},EnvVar{Name:IP_AUTODETECTION_METHOD,Value:first-found,ValueFrom:nil,},EnvVar{Name:IP6,Value:none,ValueFrom:nil,},EnvVar{Name:FELIX_IPV6SUPPORT,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-log-dir,ReadOnly:false,MountPath:/var/log/calico/cni,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-net-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:node-certs,ReadOnly:true,MountPath:/node-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:policysync,ReadOnly:false,MountPath:/var/run/nodeagent,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib-calico,ReadOnly:false,MountPath:/var/lib/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-calico,ReadOnly:false,MountPath:/var/run/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.l
ock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kmm4k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/liveness,Port:{0 9099 },Host:localhost,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/calico-node -bird-ready -felix-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/bin/calico-node -shutdown],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-node-x7x94_calico-system(ab709cf9-e61c-420b-90c5-1c0355308621): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2497376889: write /var/lib/containerd/tmpmounts/containerd-mount2497376889/usr/bin/calico-node: no space left on device" logger="UnhandledError" Aug 13 00:50:45.140788 kubelet[2714]: E0813 00:50:45.140742 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2497376889: write /var/lib/containerd/tmpmounts/containerd-mount2497376889/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-x7x94" podUID="ab709cf9-e61c-420b-90c5-1c0355308621" Aug 13 00:50:45.189263 kubelet[2714]: I0813 00:50:45.189234 2714 kubelet.go:2306] "Pod admission denied" podUID="1ce9b640-4033-4b73-8731-e5926bd0c553" pod="tigera-operator/tigera-operator-5bf8dfcb4-l6ht2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:45.290561 kubelet[2714]: I0813 00:50:45.289991 2714 kubelet.go:2306] "Pod admission denied" podUID="dc77724f-6ccf-44f7-ac1a-6ed6ff8b6aae" pod="tigera-operator/tigera-operator-5bf8dfcb4-qxzn8" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 00:50:45.388432 kubelet[2714]: I0813 00:50:45.388394 2714 kubelet.go:2306] "Pod admission denied" podUID="ef4574cc-5836-42da-b54a-a2daf0d397d5" pod="tigera-operator/tigera-operator-5bf8dfcb4-gdwmn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:45.489139 kubelet[2714]: I0813 00:50:45.489102 2714 kubelet.go:2306] "Pod admission denied" podUID="bc9a4020-2fe4-4934-9fb8-fab7b3f06c99" pod="tigera-operator/tigera-operator-5bf8dfcb4-mv42l" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:45.689849 kubelet[2714]: I0813 00:50:45.689814 2714 kubelet.go:2306] "Pod admission denied" podUID="a558016d-2fa7-486f-b31d-051d31c5bf19" pod="tigera-operator/tigera-operator-5bf8dfcb4-qm7vl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:45.789926 kubelet[2714]: I0813 00:50:45.789884 2714 kubelet.go:2306] "Pod admission denied" podUID="3cb214c2-636f-4922-9edc-2223ee441073" pod="tigera-operator/tigera-operator-5bf8dfcb4-6z58z" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:45.839245 kubelet[2714]: I0813 00:50:45.839191 2714 kubelet.go:2306] "Pod admission denied" podUID="f80aaab9-ed98-4bbb-810c-597e569c0922" pod="tigera-operator/tigera-operator-5bf8dfcb4-64s55" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:45.939450 kubelet[2714]: I0813 00:50:45.939409 2714 kubelet.go:2306] "Pod admission denied" podUID="9a7fafd9-43b2-4ff3-9824-3ab403d66e51" pod="tigera-operator/tigera-operator-5bf8dfcb4-ck5qf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:46.139298 kubelet[2714]: I0813 00:50:46.139233 2714 kubelet.go:2306] "Pod admission denied" podUID="50717ec8-e62f-49c8-8d02-2ee5e2baeaf4" pod="tigera-operator/tigera-operator-5bf8dfcb4-bgfgn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:46.239432 kubelet[2714]: I0813 00:50:46.239381 2714 kubelet.go:2306] "Pod admission denied" podUID="6c8a081e-7ffe-49fc-8a98-0c76d7368873" pod="tigera-operator/tigera-operator-5bf8dfcb4-j66rx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:46.339419 kubelet[2714]: I0813 00:50:46.339336 2714 kubelet.go:2306] "Pod admission denied" podUID="8ed2dfe6-c180-4f8f-bf01-3bcf1f27da30" pod="tigera-operator/tigera-operator-5bf8dfcb4-jsdx6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:46.540819 kubelet[2714]: I0813 00:50:46.540685 2714 kubelet.go:2306] "Pod admission denied" podUID="e770510b-bad0-4ce0-bbd0-3bdc6465ed6b" pod="tigera-operator/tigera-operator-5bf8dfcb4-b98sr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:46.639540 kubelet[2714]: I0813 00:50:46.639477 2714 kubelet.go:2306] "Pod admission denied" podUID="39b748ab-c123-4b00-a6d6-800b8e03c42c" pod="tigera-operator/tigera-operator-5bf8dfcb4-l29bf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:46.738841 kubelet[2714]: I0813 00:50:46.738797 2714 kubelet.go:2306] "Pod admission denied" podUID="f04495d9-83f9-402b-98c3-6fb5ac841a61" pod="tigera-operator/tigera-operator-5bf8dfcb4-bqfbv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:46.839295 kubelet[2714]: I0813 00:50:46.839174 2714 kubelet.go:2306] "Pod admission denied" podUID="56e2678f-6f65-40f3-83b3-31841c65c11c" pod="tigera-operator/tigera-operator-5bf8dfcb4-x8wsv" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 00:50:46.888772 kubelet[2714]: I0813 00:50:46.888732 2714 kubelet.go:2306] "Pod admission denied" podUID="df6764e0-e1cb-4e13-9e31-e4a8f65d5ace" pod="tigera-operator/tigera-operator-5bf8dfcb4-tlt56" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:47.011454 kubelet[2714]: I0813 00:50:47.011418 2714 kubelet.go:2306] "Pod admission denied" podUID="4e963801-acd6-48a8-8fee-27117c90c46a" pod="tigera-operator/tigera-operator-5bf8dfcb4-qh2zs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:47.091707 kubelet[2714]: I0813 00:50:47.091054 2714 kubelet.go:2306] "Pod admission denied" podUID="8d81bb2a-268c-45e5-bb8b-599461ec51a3" pod="tigera-operator/tigera-operator-5bf8dfcb4-zhwd7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:47.187551 kubelet[2714]: I0813 00:50:47.187496 2714 kubelet.go:2306] "Pod admission denied" podUID="7826c6bd-3d2f-4bc5-84ba-675602e9689b" pod="tigera-operator/tigera-operator-5bf8dfcb4-vvgfk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:47.291279 kubelet[2714]: I0813 00:50:47.291228 2714 kubelet.go:2306] "Pod admission denied" podUID="eb77170a-66cf-423e-8e19-430812cbcfa7" pod="tigera-operator/tigera-operator-5bf8dfcb4-zm7th" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:47.389893 kubelet[2714]: I0813 00:50:47.389850 2714 kubelet.go:2306] "Pod admission denied" podUID="0b502e81-b6f7-4dc9-84fa-149bf0a5c504" pod="tigera-operator/tigera-operator-5bf8dfcb4-pv6xj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:47.589011 kubelet[2714]: I0813 00:50:47.588970 2714 kubelet.go:2306] "Pod admission denied" podUID="b88d8c5b-8ee6-42cd-8756-de33f54739d2" pod="tigera-operator/tigera-operator-5bf8dfcb4-tv4nj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:47.689870 kubelet[2714]: I0813 00:50:47.689634 2714 kubelet.go:2306] "Pod admission denied" podUID="87321993-ad3b-4b76-a3fb-62584fc6c6c3" pod="tigera-operator/tigera-operator-5bf8dfcb4-wk6zc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:47.737448 kubelet[2714]: I0813 00:50:47.737414 2714 kubelet.go:2306] "Pod admission denied" podUID="4f2203e6-dc86-4730-a193-7067b555564e" pod="tigera-operator/tigera-operator-5bf8dfcb4-rb28s" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:47.841368 kubelet[2714]: I0813 00:50:47.841329 2714 kubelet.go:2306] "Pod admission denied" podUID="68f67f75-ac6e-4b09-9d56-5b8146f0b719" pod="tigera-operator/tigera-operator-5bf8dfcb4-j7bb2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:47.939440 kubelet[2714]: I0813 00:50:47.939348 2714 kubelet.go:2306] "Pod admission denied" podUID="500a20ea-6ad7-494f-b413-457e946d020c" pod="tigera-operator/tigera-operator-5bf8dfcb4-vcxs8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:47.987608 kubelet[2714]: I0813 00:50:47.987125 2714 kubelet.go:2306] "Pod admission denied" podUID="abeac7d2-0748-407f-bbba-6ea46c2c08da" pod="tigera-operator/tigera-operator-5bf8dfcb4-v4sb4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:48.089410 kubelet[2714]: I0813 00:50:48.089373 2714 kubelet.go:2306] "Pod admission denied" podUID="fa5d5481-dbdc-46db-888f-77011e204792" pod="tigera-operator/tigera-operator-5bf8dfcb4-mq9xp" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 00:50:48.187948 kubelet[2714]: I0813 00:50:48.187901 2714 kubelet.go:2306] "Pod admission denied" podUID="f72f90db-bb22-42b2-b2be-cf047af164ab" pod="tigera-operator/tigera-operator-5bf8dfcb4-55g6m" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:48.287599 kubelet[2714]: I0813 00:50:48.287483 2714 kubelet.go:2306] "Pod admission denied" podUID="d129b924-c889-4b32-94aa-0b4b583d67b5" pod="tigera-operator/tigera-operator-5bf8dfcb4-xqvbk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:48.389784 kubelet[2714]: I0813 00:50:48.389738 2714 kubelet.go:2306] "Pod admission denied" podUID="22c4c05a-efae-4f4d-b567-e1e62fb96a4b" pod="tigera-operator/tigera-operator-5bf8dfcb4-5x26x" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:48.488568 kubelet[2714]: I0813 00:50:48.488496 2714 kubelet.go:2306] "Pod admission denied" podUID="962d5ec4-6ba1-4f51-a893-49983ba8911b" pod="tigera-operator/tigera-operator-5bf8dfcb4-tbfkt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:48.588757 kubelet[2714]: I0813 00:50:48.588415 2714 kubelet.go:2306] "Pod admission denied" podUID="a6ec5917-f5e9-4081-aadf-ac265688e625" pod="tigera-operator/tigera-operator-5bf8dfcb4-6df25" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:48.636994 kubelet[2714]: I0813 00:50:48.636959 2714 kubelet.go:2306] "Pod admission denied" podUID="13e788cc-7605-4f65-93f8-91bef7f5afaf" pod="tigera-operator/tigera-operator-5bf8dfcb4-4szld" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:48.738452 kubelet[2714]: I0813 00:50:48.738416 2714 kubelet.go:2306] "Pod admission denied" podUID="6ea130e1-556d-430e-9dae-c0714d5d4cdc" pod="tigera-operator/tigera-operator-5bf8dfcb4-h2nts" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:48.838623 kubelet[2714]: I0813 00:50:48.838591 2714 kubelet.go:2306] "Pod admission denied" podUID="a7e17008-efca-4a1c-b109-7b6b58d93d01" pod="tigera-operator/tigera-operator-5bf8dfcb4-xtkw7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:48.939482 kubelet[2714]: I0813 00:50:48.939442 2714 kubelet.go:2306] "Pod admission denied" podUID="aa572d92-9fa3-462f-abd6-096baaf75f03" pod="tigera-operator/tigera-operator-5bf8dfcb4-wtlzf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:49.140432 kubelet[2714]: I0813 00:50:49.140387 2714 kubelet.go:2306] "Pod admission denied" podUID="210d568f-0cf0-49f3-98a9-2fdf713c52df" pod="tigera-operator/tigera-operator-5bf8dfcb4-288nx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:49.238288 kubelet[2714]: I0813 00:50:49.237923 2714 kubelet.go:2306] "Pod admission denied" podUID="301fe4b6-0d03-47ae-90fc-100394ff1cd0" pod="tigera-operator/tigera-operator-5bf8dfcb4-8j5dn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:49.337709 kubelet[2714]: I0813 00:50:49.337660 2714 kubelet.go:2306] "Pod admission denied" podUID="784820db-0f51-4695-99f2-0e1621952efe" pod="tigera-operator/tigera-operator-5bf8dfcb4-l56hd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:49.437478 kubelet[2714]: I0813 00:50:49.437444 2714 kubelet.go:2306] "Pod admission denied" podUID="f961fddb-26df-4c6d-b906-3e0e8e6a0794" pod="tigera-operator/tigera-operator-5bf8dfcb4-mkh77" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 00:50:49.540914 kubelet[2714]: I0813 00:50:49.540771 2714 kubelet.go:2306] "Pod admission denied" podUID="040dca11-19a4-4542-972e-4e8791115c3d" pod="tigera-operator/tigera-operator-5bf8dfcb4-5d9fx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:49.644863 kubelet[2714]: I0813 00:50:49.644796 2714 kubelet.go:2306] "Pod admission denied" podUID="bc1f7a09-eca4-4dda-8c78-3b77464c9737" pod="tigera-operator/tigera-operator-5bf8dfcb4-5tqr2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:49.738887 kubelet[2714]: I0813 00:50:49.738849 2714 kubelet.go:2306] "Pod admission denied" podUID="11fc743f-9757-4c71-b8b3-e80e55664cd1" pod="tigera-operator/tigera-operator-5bf8dfcb4-8shmr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:49.838127 kubelet[2714]: I0813 00:50:49.838028 2714 kubelet.go:2306] "Pod admission denied" podUID="10e28a1c-9587-4783-91f1-8c573fbefe41" pod="tigera-operator/tigera-operator-5bf8dfcb4-ss9gn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:49.938069 kubelet[2714]: I0813 00:50:49.938038 2714 kubelet.go:2306] "Pod admission denied" podUID="16ecc02b-6cc9-4592-904f-494e6a720ba8" pod="tigera-operator/tigera-operator-5bf8dfcb4-8v2xb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:50.038774 kubelet[2714]: I0813 00:50:50.038729 2714 kubelet.go:2306] "Pod admission denied" podUID="a550c984-2325-42d9-bc6e-fc1db99103eb" pod="tigera-operator/tigera-operator-5bf8dfcb4-ssgdp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:50.138549 kubelet[2714]: I0813 00:50:50.138497 2714 kubelet.go:2306] "Pod admission denied" podUID="45cbc1f1-bdb9-4230-8f3e-ba2d83840995" pod="tigera-operator/tigera-operator-5bf8dfcb4-whgh2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:50.340321 kubelet[2714]: I0813 00:50:50.340280 2714 kubelet.go:2306] "Pod admission denied" podUID="6ea81c45-20ce-4465-b157-4e0f09150b97" pod="tigera-operator/tigera-operator-5bf8dfcb4-gq9r5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:50.438492 kubelet[2714]: I0813 00:50:50.438403 2714 kubelet.go:2306] "Pod admission denied" podUID="6f9f071e-acc4-4310-8d8f-0dfce214f64a" pod="tigera-operator/tigera-operator-5bf8dfcb4-lcvf4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:50.539457 kubelet[2714]: I0813 00:50:50.539031 2714 kubelet.go:2306] "Pod admission denied" podUID="ef6db2d0-c3fa-4c9f-9de5-889661907be7" pod="tigera-operator/tigera-operator-5bf8dfcb4-7jhzt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:50.637436 kubelet[2714]: I0813 00:50:50.637395 2714 kubelet.go:2306] "Pod admission denied" podUID="6d9816b8-6337-43d4-b0ba-dc3ec68610d6" pod="tigera-operator/tigera-operator-5bf8dfcb4-rdnkl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:50.737401 kubelet[2714]: I0813 00:50:50.737138 2714 kubelet.go:2306] "Pod admission denied" podUID="91799a4e-a024-496f-bc0b-6982578a7f26" pod="tigera-operator/tigera-operator-5bf8dfcb4-twjfh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:50.838230 kubelet[2714]: I0813 00:50:50.838191 2714 kubelet.go:2306] "Pod admission denied" podUID="683093f5-4ac9-44b7-ac09-38d895f47ad0" pod="tigera-operator/tigera-operator-5bf8dfcb4-rwl8w" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 00:50:50.937375 kubelet[2714]: I0813 00:50:50.937338 2714 kubelet.go:2306] "Pod admission denied" podUID="e09d1838-fd52-4628-bb81-e24d356be38c" pod="tigera-operator/tigera-operator-5bf8dfcb4-sr5dn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:51.038329 kubelet[2714]: I0813 00:50:51.037798 2714 kubelet.go:2306] "Pod admission denied" podUID="46c0729e-ca67-4082-bcd1-35cd86a880b1" pod="tigera-operator/tigera-operator-5bf8dfcb4-ngfwr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:51.138566 kubelet[2714]: I0813 00:50:51.138489 2714 kubelet.go:2306] "Pod admission denied" podUID="94dff833-a324-425f-bb51-5f6662d17fb7" pod="tigera-operator/tigera-operator-5bf8dfcb4-b544l" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:51.238797 kubelet[2714]: I0813 00:50:51.238760 2714 kubelet.go:2306] "Pod admission denied" podUID="3c36497b-a461-48b0-8c80-82c0a74d00c3" pod="tigera-operator/tigera-operator-5bf8dfcb4-cv2nk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:51.338103 kubelet[2714]: I0813 00:50:51.337998 2714 kubelet.go:2306] "Pod admission denied" podUID="4be640a6-17ff-4725-be4c-3521d547ea11" pod="tigera-operator/tigera-operator-5bf8dfcb4-c7msq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:51.439612 kubelet[2714]: I0813 00:50:51.439327 2714 kubelet.go:2306] "Pod admission denied" podUID="8b4bacb1-62ff-4d94-a6bc-6aaf1393efd1" pod="tigera-operator/tigera-operator-5bf8dfcb4-248hj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:51.537231 kubelet[2714]: I0813 00:50:51.537194 2714 kubelet.go:2306] "Pod admission denied" podUID="b730b6ef-bc3d-4a2e-bf56-dd694c0718ee" pod="tigera-operator/tigera-operator-5bf8dfcb4-dgv8b" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:51.639888 kubelet[2714]: I0813 00:50:51.639828 2714 kubelet.go:2306] "Pod admission denied" podUID="3c2e69e8-8b9e-494e-9abc-05cf2a7e3c3f" pod="tigera-operator/tigera-operator-5bf8dfcb4-2f655" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:51.693931 kubelet[2714]: I0813 00:50:51.692787 2714 kubelet.go:2306] "Pod admission denied" podUID="f2276bda-556a-4bb2-aa7b-5f065deebfa1" pod="tigera-operator/tigera-operator-5bf8dfcb4-fgsxs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:51.789012 kubelet[2714]: I0813 00:50:51.788979 2714 kubelet.go:2306] "Pod admission denied" podUID="19d7232e-06ba-4a04-ade4-0d861c5670df" pod="tigera-operator/tigera-operator-5bf8dfcb4-8d5c5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:51.889964 kubelet[2714]: I0813 00:50:51.889832 2714 kubelet.go:2306] "Pod admission denied" podUID="8b5ce5ec-37ae-4487-8007-56f9e144761d" pod="tigera-operator/tigera-operator-5bf8dfcb4-lk4l4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:51.988295 kubelet[2714]: I0813 00:50:51.988236 2714 kubelet.go:2306] "Pod admission denied" podUID="be96f37e-251a-4cca-847e-0ba08504cb10" pod="tigera-operator/tigera-operator-5bf8dfcb4-9qgwn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:52.190716 kubelet[2714]: I0813 00:50:52.190589 2714 kubelet.go:2306] "Pod admission denied" podUID="3dedbc83-e971-411d-bec3-671b08065c69" pod="tigera-operator/tigera-operator-5bf8dfcb4-jxgp4" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 00:50:52.288854 kubelet[2714]: I0813 00:50:52.288818 2714 kubelet.go:2306] "Pod admission denied" podUID="5e0e2de3-2ac3-47f2-a5be-f4d79f6b767e" pod="tigera-operator/tigera-operator-5bf8dfcb4-4xcbq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:52.388495 kubelet[2714]: I0813 00:50:52.388459 2714 kubelet.go:2306] "Pod admission denied" podUID="abbc8a44-9617-4147-abc1-3ad69c2542c8" pod="tigera-operator/tigera-operator-5bf8dfcb4-p49xr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:52.495617 kubelet[2714]: I0813 00:50:52.495107 2714 kubelet.go:2306] "Pod admission denied" podUID="2289f051-0de4-445d-8b47-6a4e9a1407af" pod="tigera-operator/tigera-operator-5bf8dfcb4-pzdq4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:52.588545 kubelet[2714]: I0813 00:50:52.588468 2714 kubelet.go:2306] "Pod admission denied" podUID="e0418950-0169-4649-bda0-c1d5b6336811" pod="tigera-operator/tigera-operator-5bf8dfcb4-pvg62" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:52.690126 kubelet[2714]: I0813 00:50:52.690079 2714 kubelet.go:2306] "Pod admission denied" podUID="a0101222-0804-430f-8e37-6c7382b97819" pod="tigera-operator/tigera-operator-5bf8dfcb4-c7lz2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:52.787988 kubelet[2714]: I0813 00:50:52.787863 2714 kubelet.go:2306] "Pod admission denied" podUID="356a8247-4e4e-4deb-909b-d93186fac76e" pod="tigera-operator/tigera-operator-5bf8dfcb4-vt4l2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:52.891334 kubelet[2714]: I0813 00:50:52.891296 2714 kubelet.go:2306] "Pod admission denied" podUID="541c1edc-e17c-48ea-92db-f65767e37675" pod="tigera-operator/tigera-operator-5bf8dfcb4-l96qq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:52.937773 kubelet[2714]: I0813 00:50:52.937734 2714 kubelet.go:2306] "Pod admission denied" podUID="830cef7c-d3d1-4401-b36e-3d10cdbe9899" pod="tigera-operator/tigera-operator-5bf8dfcb4-rplwp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:53.041393 kubelet[2714]: I0813 00:50:53.041043 2714 kubelet.go:2306] "Pod admission denied" podUID="84f2f33b-5a5c-4170-9357-3abdca806dfe" pod="tigera-operator/tigera-operator-5bf8dfcb4-ljht2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:53.245555 kubelet[2714]: I0813 00:50:53.245465 2714 kubelet.go:2306] "Pod admission denied" podUID="23d19949-5008-4e9d-ad59-06d3b477aaf5" pod="tigera-operator/tigera-operator-5bf8dfcb4-vr472" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:53.340191 kubelet[2714]: I0813 00:50:53.339202 2714 kubelet.go:2306] "Pod admission denied" podUID="5e06ddab-28a2-44ba-a4b0-0050ecad0b40" pod="tigera-operator/tigera-operator-5bf8dfcb4-p5r97" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:53.446212 kubelet[2714]: I0813 00:50:53.446160 2714 kubelet.go:2306] "Pod admission denied" podUID="f9d2abb9-2727-4273-b8f9-993d0164cbca" pod="tigera-operator/tigera-operator-5bf8dfcb4-q9t4q" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 00:50:53.447481 kubelet[2714]: I0813 00:50:53.447459 2714 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:50:53.447481 kubelet[2714]: I0813 00:50:53.447481 2714 container_gc.go:88] "Attempting to delete unused containers" Aug 13 00:50:53.450359 containerd[1575]: time="2025-08-13T00:50:53.450332676Z" level=info msg="StopPodSandbox for \"66d7027e0c86c0dbacf4cf20a21bfeb1ddf6804c63321f8aa58ae81bc308e8f3\"" Aug 13 00:50:53.450949 containerd[1575]: time="2025-08-13T00:50:53.450879051Z" level=info msg="TearDown network for sandbox \"66d7027e0c86c0dbacf4cf20a21bfeb1ddf6804c63321f8aa58ae81bc308e8f3\" successfully" Aug 13 00:50:53.450949 containerd[1575]: time="2025-08-13T00:50:53.450897231Z" level=info msg="StopPodSandbox for \"66d7027e0c86c0dbacf4cf20a21bfeb1ddf6804c63321f8aa58ae81bc308e8f3\" returns successfully" Aug 13 00:50:53.452024 containerd[1575]: time="2025-08-13T00:50:53.451927500Z" level=info msg="RemovePodSandbox for \"66d7027e0c86c0dbacf4cf20a21bfeb1ddf6804c63321f8aa58ae81bc308e8f3\"" Aug 13 00:50:53.452024 containerd[1575]: time="2025-08-13T00:50:53.451981981Z" level=info msg="Forcibly stopping sandbox \"66d7027e0c86c0dbacf4cf20a21bfeb1ddf6804c63321f8aa58ae81bc308e8f3\"" Aug 13 00:50:53.452111 containerd[1575]: time="2025-08-13T00:50:53.452067851Z" level=info msg="TearDown network for sandbox \"66d7027e0c86c0dbacf4cf20a21bfeb1ddf6804c63321f8aa58ae81bc308e8f3\" successfully" Aug 13 00:50:53.453308 containerd[1575]: time="2025-08-13T00:50:53.453279982Z" level=info msg="Ensure that sandbox 66d7027e0c86c0dbacf4cf20a21bfeb1ddf6804c63321f8aa58ae81bc308e8f3 in task-service has been cleanup successfully" Aug 13 00:50:53.455844 containerd[1575]: time="2025-08-13T00:50:53.455700804Z" level=info msg="RemovePodSandbox \"66d7027e0c86c0dbacf4cf20a21bfeb1ddf6804c63321f8aa58ae81bc308e8f3\" returns successfully" Aug 13 00:50:53.456879 kubelet[2714]: I0813 00:50:53.456854 2714 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:50:53.466694 kubelet[2714]: I0813 00:50:53.466667 2714 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:50:53.466810 kubelet[2714]: I0813 00:50:53.466742 2714 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-6d647ccb87-wkv5s","kube-system/coredns-7c65d6cfc9-hxx58","kube-system/coredns-7c65d6cfc9-dnlsw","calico-system/csi-node-driver-mmxc6","calico-system/calico-node-x7x94","calico-system/calico-typha-644589c98-5v7wp","kube-system/kube-controller-manager-172-234-199-101","kube-system/kube-proxy-vxgfg","kube-system/kube-apiserver-172-234-199-101","kube-system/kube-scheduler-172-234-199-101"] Aug 13 00:50:53.466810 kubelet[2714]: E0813 00:50:53.466767 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" Aug 13 00:50:53.466810 kubelet[2714]: E0813 00:50:53.466776 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-hxx58" Aug 13 00:50:53.466810 kubelet[2714]: E0813 00:50:53.466784 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-dnlsw" Aug 13 00:50:53.466810 kubelet[2714]: E0813 00:50:53.466790 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-mmxc6" Aug 13 00:50:53.466810 kubelet[2714]: 
E0813 00:50:53.466796 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-x7x94" Aug 13 00:50:53.466810 kubelet[2714]: E0813 00:50:53.466806 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-644589c98-5v7wp" Aug 13 00:50:53.466810 kubelet[2714]: E0813 00:50:53.466815 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-199-101" Aug 13 00:50:53.467046 kubelet[2714]: E0813 00:50:53.466822 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-vxgfg" Aug 13 00:50:53.467046 kubelet[2714]: E0813 00:50:53.466830 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-199-101" Aug 13 00:50:53.467046 kubelet[2714]: E0813 00:50:53.466838 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-199-101" Aug 13 00:50:53.467046 kubelet[2714]: I0813 00:50:53.466848 2714 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 00:50:53.538535 kubelet[2714]: I0813 00:50:53.538455 2714 kubelet.go:2306] "Pod admission denied" podUID="f0ee1e1c-78c9-4e49-a16d-916486198f55" pod="tigera-operator/tigera-operator-5bf8dfcb4-d92hd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:53.637358 kubelet[2714]: I0813 00:50:53.637323 2714 kubelet.go:2306] "Pod admission denied" podUID="b4af53cf-4e2c-42be-9d2f-f3a9dc04c373" pod="tigera-operator/tigera-operator-5bf8dfcb4-hrlc9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:53.737945 kubelet[2714]: I0813 00:50:53.737911 2714 kubelet.go:2306] "Pod admission denied" podUID="352e0a5f-5525-4d6f-bfd5-9947f948907b" pod="tigera-operator/tigera-operator-5bf8dfcb4-7xw98" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:53.843264 kubelet[2714]: I0813 00:50:53.843229 2714 kubelet.go:2306] "Pod admission denied" podUID="72a2809d-abb2-4620-85b8-84a9cfd1b854" pod="tigera-operator/tigera-operator-5bf8dfcb4-2l8fs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:53.939328 kubelet[2714]: I0813 00:50:53.939216 2714 kubelet.go:2306] "Pod admission denied" podUID="5a4bbbb4-586b-44b0-99e2-3853a3151c6c" pod="tigera-operator/tigera-operator-5bf8dfcb4-dlxw9" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 00:50:53.956001 kubelet[2714]: E0813 00:50:53.955707 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:50:53.956276 containerd[1575]: time="2025-08-13T00:50:53.956227758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dnlsw,Uid:fd8972e4-10f5-4f13-8b21-de07e7f562ab,Namespace:kube-system,Attempt:0,}" Aug 13 00:50:53.956741 containerd[1575]: time="2025-08-13T00:50:53.956554841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d647ccb87-wkv5s,Uid:2addd270-7ed2-4caf-9455-ca6a63f6fe8b,Namespace:calico-system,Attempt:0,}" Aug 13 00:50:53.995160 kubelet[2714]: I0813 00:50:53.995113 2714 kubelet.go:2306] "Pod admission denied" podUID="bf37e8a5-cafa-4644-981e-36c77cfe23b8" pod="tigera-operator/tigera-operator-5bf8dfcb4-6wpf8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:54.033567 containerd[1575]: time="2025-08-13T00:50:54.033358621Z" level=error msg="Failed to destroy network for sandbox \"22ac9a0e2ce8705d60f18f052c5440ac3c9738195247d897a26cd6657c5d1557\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:54.036642 containerd[1575]: time="2025-08-13T00:50:54.036570590Z" level=error msg="Failed to destroy network for sandbox \"ee5cb72c9c672418d5f8fc849c2d3405cf0c0a55e7f9232cd09cb69b40738628\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:54.037153 systemd[1]: run-netns-cni\x2de4d1a694\x2d9ccc\x2d8fe8\x2d2465\x2d3dd0ec63a8c0.mount: Deactivated successfully. 
Aug 13 00:50:54.037917 containerd[1575]: time="2025-08-13T00:50:54.037765760Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d647ccb87-wkv5s,Uid:2addd270-7ed2-4caf-9455-ca6a63f6fe8b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"22ac9a0e2ce8705d60f18f052c5440ac3c9738195247d897a26cd6657c5d1557\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:54.040783 kubelet[2714]: E0813 00:50:54.040734 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22ac9a0e2ce8705d60f18f052c5440ac3c9738195247d897a26cd6657c5d1557\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:54.041246 kubelet[2714]: E0813 00:50:54.040870 2714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22ac9a0e2ce8705d60f18f052c5440ac3c9738195247d897a26cd6657c5d1557\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" Aug 13 00:50:54.041246 kubelet[2714]: E0813 00:50:54.040892 2714 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22ac9a0e2ce8705d60f18f052c5440ac3c9738195247d897a26cd6657c5d1557\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" Aug 13 00:50:54.041246 kubelet[2714]: E0813 00:50:54.040941 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6d647ccb87-wkv5s_calico-system(2addd270-7ed2-4caf-9455-ca6a63f6fe8b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6d647ccb87-wkv5s_calico-system(2addd270-7ed2-4caf-9455-ca6a63f6fe8b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"22ac9a0e2ce8705d60f18f052c5440ac3c9738195247d897a26cd6657c5d1557\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" podUID="2addd270-7ed2-4caf-9455-ca6a63f6fe8b" Aug 13 00:50:54.041605 systemd[1]: run-netns-cni\x2d67864a80\x2deb58\x2ddfdd\x2d9ebd\x2da787878c7c33.mount: Deactivated successfully. 
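Annotation: every "Pod admission denied" entry in this section cites the same node condition. One way to confirm it from outside the node is to read the node object; a minimal sketch, assuming the kubernetes Python client, a working kubeconfig for this cluster, and a node name inferred from the static pod names in the log (all three are assumptions):

from kubernetes import client, config

config.load_kube_config()          # assumes a local kubeconfig with cluster access
v1 = client.CoreV1Api()

node = v1.read_node("172-234-199-101")  # node name inferred from the static pod names above
for cond in node.status.conditions:
    if cond.type == "DiskPressure":
        # Matches the condition reported in the admission denials.
        print(f"DiskPressure={cond.status}: {cond.message}")
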
Aug 13 00:50:54.041996 containerd[1575]: time="2025-08-13T00:50:54.041921896Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dnlsw,Uid:fd8972e4-10f5-4f13-8b21-de07e7f562ab,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee5cb72c9c672418d5f8fc849c2d3405cf0c0a55e7f9232cd09cb69b40738628\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:54.042341 kubelet[2714]: E0813 00:50:54.042137 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee5cb72c9c672418d5f8fc849c2d3405cf0c0a55e7f9232cd09cb69b40738628\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:54.042341 kubelet[2714]: E0813 00:50:54.042299 2714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee5cb72c9c672418d5f8fc849c2d3405cf0c0a55e7f9232cd09cb69b40738628\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-dnlsw" Aug 13 00:50:54.042341 kubelet[2714]: E0813 00:50:54.042312 2714 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee5cb72c9c672418d5f8fc849c2d3405cf0c0a55e7f9232cd09cb69b40738628\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-dnlsw" Aug 13 00:50:54.042589 kubelet[2714]: E0813 00:50:54.042563 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-dnlsw_kube-system(fd8972e4-10f5-4f13-8b21-de07e7f562ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-dnlsw_kube-system(fd8972e4-10f5-4f13-8b21-de07e7f562ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ee5cb72c9c672418d5f8fc849c2d3405cf0c0a55e7f9232cd09cb69b40738628\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-dnlsw" podUID="fd8972e4-10f5-4f13-8b21-de07e7f562ab" Aug 13 00:50:54.089976 kubelet[2714]: I0813 00:50:54.089722 2714 kubelet.go:2306] "Pod admission denied" podUID="6e292569-982a-41db-81fb-ff4fc160af1c" pod="tigera-operator/tigera-operator-5bf8dfcb4-59hs8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:54.188317 kubelet[2714]: I0813 00:50:54.188274 2714 kubelet.go:2306] "Pod admission denied" podUID="1eaa9c36-2ccb-4d23-bfaa-02c0c05e3949" pod="tigera-operator/tigera-operator-5bf8dfcb4-fl7jc" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 00:50:54.287193 kubelet[2714]: I0813 00:50:54.287099 2714 kubelet.go:2306] "Pod admission denied" podUID="26a749da-586d-4b55-b923-712e8107dd81" pod="tigera-operator/tigera-operator-5bf8dfcb4-8wrhb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:54.391535 kubelet[2714]: I0813 00:50:54.390727 2714 kubelet.go:2306] "Pod admission denied" podUID="bb516202-4e52-41ec-bec7-60dfc3f4d73b" pod="tigera-operator/tigera-operator-5bf8dfcb4-svgjp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:54.487551 kubelet[2714]: I0813 00:50:54.487511 2714 kubelet.go:2306] "Pod admission denied" podUID="56a844a0-9561-4f64-a043-b972a8c6a4c2" pod="tigera-operator/tigera-operator-5bf8dfcb4-87rg5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:54.649922 kubelet[2714]: I0813 00:50:54.649881 2714 kubelet.go:2306] "Pod admission denied" podUID="d83078cc-b941-436a-a9d7-634552bcf492" pod="tigera-operator/tigera-operator-5bf8dfcb4-c4fdt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:54.666509 kubelet[2714]: I0813 00:50:54.666469 2714 kubelet.go:2306] "Pod admission denied" podUID="db0449ff-48dd-44e5-82b6-f761c682ae37" pod="tigera-operator/tigera-operator-5bf8dfcb4-wm8wp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:54.740398 kubelet[2714]: I0813 00:50:54.740359 2714 kubelet.go:2306] "Pod admission denied" podUID="9c54b6fb-cebd-41cf-9a7a-a8df17dffcf0" pod="tigera-operator/tigera-operator-5bf8dfcb4-csdkt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:54.839459 kubelet[2714]: I0813 00:50:54.839426 2714 kubelet.go:2306] "Pod admission denied" podUID="3939ab28-d14d-40a9-8935-dd680f5daff3" pod="tigera-operator/tigera-operator-5bf8dfcb4-dpdbz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:54.938894 kubelet[2714]: I0813 00:50:54.938622 2714 kubelet.go:2306] "Pod admission denied" podUID="8d2c0dde-6466-453a-9295-0830699218ed" pod="tigera-operator/tigera-operator-5bf8dfcb4-cjr2c" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:55.039879 kubelet[2714]: I0813 00:50:55.039671 2714 kubelet.go:2306] "Pod admission denied" podUID="198116bb-fb53-4397-8663-8b164d1ba505" pod="tigera-operator/tigera-operator-5bf8dfcb4-b8rcs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:55.140785 kubelet[2714]: I0813 00:50:55.140733 2714 kubelet.go:2306] "Pod admission denied" podUID="23f37c10-5e39-46de-9a80-75cc85e1dbe1" pod="tigera-operator/tigera-operator-5bf8dfcb4-lxvjc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:55.240393 kubelet[2714]: I0813 00:50:55.240073 2714 kubelet.go:2306] "Pod admission denied" podUID="08f04d3d-b6b0-44cb-b6bb-4f5f5686eba9" pod="tigera-operator/tigera-operator-5bf8dfcb4-8twqh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:55.340869 kubelet[2714]: I0813 00:50:55.340830 2714 kubelet.go:2306] "Pod admission denied" podUID="8da26a18-dfd1-4250-a256-e88c0e3977e4" pod="tigera-operator/tigera-operator-5bf8dfcb4-whv8w" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:55.540739 kubelet[2714]: I0813 00:50:55.540623 2714 kubelet.go:2306] "Pod admission denied" podUID="8b5ac484-7003-4982-a577-cdd96adf7d90" pod="tigera-operator/tigera-operator-5bf8dfcb4-9v4l9" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 00:50:55.640789 kubelet[2714]: I0813 00:50:55.640740 2714 kubelet.go:2306] "Pod admission denied" podUID="2aa8d554-b499-4a9d-ac3b-5fa3de83946d" pod="tigera-operator/tigera-operator-5bf8dfcb4-4fsqt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:55.740257 kubelet[2714]: I0813 00:50:55.740206 2714 kubelet.go:2306] "Pod admission denied" podUID="ef553ec5-1ac2-490f-96e7-95db06f1041e" pod="tigera-operator/tigera-operator-5bf8dfcb4-6bgz7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:55.840066 kubelet[2714]: I0813 00:50:55.839954 2714 kubelet.go:2306] "Pod admission denied" podUID="0102080a-90bf-4eb0-882e-36481813a94f" pod="tigera-operator/tigera-operator-5bf8dfcb4-m8tc9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:55.940016 kubelet[2714]: I0813 00:50:55.939960 2714 kubelet.go:2306] "Pod admission denied" podUID="3cb22295-844f-4128-887d-ab9817cafd26" pod="tigera-operator/tigera-operator-5bf8dfcb4-6brf6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:55.956729 containerd[1575]: time="2025-08-13T00:50:55.956557153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mmxc6,Uid:7697ce71-aa40-4c78-acaa-c59079720a2c,Namespace:calico-system,Attempt:0,}" Aug 13 00:50:55.958484 kubelet[2714]: E0813 00:50:55.956949 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\"\"" pod="calico-system/calico-node-x7x94" podUID="ab709cf9-e61c-420b-90c5-1c0355308621" Aug 13 00:50:56.014825 containerd[1575]: time="2025-08-13T00:50:56.014778643Z" level=error msg="Failed to destroy network for sandbox \"c951fd7283494bf8245169bc425b59aad8ca8e73ce93699cf637cd005766fbaa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:56.016886 systemd[1]: run-netns-cni\x2d86cbf111\x2d7603\x2d7de1\x2d77d9\x2db0751b747ece.mount: Deactivated successfully. 
Aug 13 00:50:56.017835 containerd[1575]: time="2025-08-13T00:50:56.017330874Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mmxc6,Uid:7697ce71-aa40-4c78-acaa-c59079720a2c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c951fd7283494bf8245169bc425b59aad8ca8e73ce93699cf637cd005766fbaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:56.018864 kubelet[2714]: E0813 00:50:56.017592 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c951fd7283494bf8245169bc425b59aad8ca8e73ce93699cf637cd005766fbaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:56.018864 kubelet[2714]: E0813 00:50:56.017643 2714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c951fd7283494bf8245169bc425b59aad8ca8e73ce93699cf637cd005766fbaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mmxc6" Aug 13 00:50:56.018864 kubelet[2714]: E0813 00:50:56.017661 2714 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c951fd7283494bf8245169bc425b59aad8ca8e73ce93699cf637cd005766fbaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mmxc6" Aug 13 00:50:56.018864 kubelet[2714]: E0813 00:50:56.017696 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mmxc6_calico-system(7697ce71-aa40-4c78-acaa-c59079720a2c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mmxc6_calico-system(7697ce71-aa40-4c78-acaa-c59079720a2c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c951fd7283494bf8245169bc425b59aad8ca8e73ce93699cf637cd005766fbaa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mmxc6" podUID="7697ce71-aa40-4c78-acaa-c59079720a2c" Aug 13 00:50:56.040719 kubelet[2714]: I0813 00:50:56.040688 2714 kubelet.go:2306] "Pod admission denied" podUID="b1090feb-5533-4e7c-93ee-4e4bc07957e8" pod="tigera-operator/tigera-operator-5bf8dfcb4-ldlkg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:56.140001 kubelet[2714]: I0813 00:50:56.139887 2714 kubelet.go:2306] "Pod admission denied" podUID="87ced021-8671-4d71-8a28-3d49b02c0fd0" pod="tigera-operator/tigera-operator-5bf8dfcb4-prw75" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 00:50:56.240304 kubelet[2714]: I0813 00:50:56.240257 2714 kubelet.go:2306] "Pod admission denied" podUID="79b8948e-9dfb-4841-bc38-c24d5869f3a6" pod="tigera-operator/tigera-operator-5bf8dfcb4-lngxf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:56.340205 kubelet[2714]: I0813 00:50:56.339959 2714 kubelet.go:2306] "Pod admission denied" podUID="80951b81-4d2a-4a4f-87b3-5c8a0e2086bf" pod="tigera-operator/tigera-operator-5bf8dfcb4-glv25" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:56.539044 kubelet[2714]: I0813 00:50:56.538884 2714 kubelet.go:2306] "Pod admission denied" podUID="72037ebd-a019-4408-877d-b87d8039406c" pod="tigera-operator/tigera-operator-5bf8dfcb4-grpgz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:56.640634 kubelet[2714]: I0813 00:50:56.640586 2714 kubelet.go:2306] "Pod admission denied" podUID="36bbdd17-c7b5-4d32-8bf9-cbe13f7aff37" pod="tigera-operator/tigera-operator-5bf8dfcb4-4m89h" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:56.739604 kubelet[2714]: I0813 00:50:56.739560 2714 kubelet.go:2306] "Pod admission denied" podUID="b1b245a1-9d9b-495a-9804-fee496d12944" pod="tigera-operator/tigera-operator-5bf8dfcb4-n4l6g" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:56.941429 kubelet[2714]: I0813 00:50:56.941382 2714 kubelet.go:2306] "Pod admission denied" podUID="4571a4a9-5413-4a31-992c-a48f93b5a561" pod="tigera-operator/tigera-operator-5bf8dfcb4-ddwz6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:57.042408 kubelet[2714]: I0813 00:50:57.042345 2714 kubelet.go:2306] "Pod admission denied" podUID="abfad2c9-744f-41ab-a43f-eaa6a45fbb7b" pod="tigera-operator/tigera-operator-5bf8dfcb4-jh4qk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:57.092472 kubelet[2714]: I0813 00:50:57.092432 2714 kubelet.go:2306] "Pod admission denied" podUID="b71b7ef1-50af-4b7a-97cc-b75354a03856" pod="tigera-operator/tigera-operator-5bf8dfcb4-78t4c" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:57.190654 kubelet[2714]: I0813 00:50:57.190605 2714 kubelet.go:2306] "Pod admission denied" podUID="997845b0-4e97-4c8b-ab81-002dc6722592" pod="tigera-operator/tigera-operator-5bf8dfcb4-pgpp9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:57.291138 kubelet[2714]: I0813 00:50:57.290864 2714 kubelet.go:2306] "Pod admission denied" podUID="5a400a5e-67ce-4181-856f-953614a87c35" pod="tigera-operator/tigera-operator-5bf8dfcb4-jdk95" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:57.406733 kubelet[2714]: I0813 00:50:57.406498 2714 kubelet.go:2306] "Pod admission denied" podUID="8cbc59f1-c79d-4461-84b8-9a4bb6f72839" pod="tigera-operator/tigera-operator-5bf8dfcb4-9pw2j" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:57.490115 kubelet[2714]: I0813 00:50:57.490077 2714 kubelet.go:2306] "Pod admission denied" podUID="33ed108c-0923-4bfb-97f4-ea388d1cf906" pod="tigera-operator/tigera-operator-5bf8dfcb4-8ljcs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:57.590620 kubelet[2714]: I0813 00:50:57.590371 2714 kubelet.go:2306] "Pod admission denied" podUID="7e43c897-f1c6-47c9-b7da-7cd441dad74d" pod="tigera-operator/tigera-operator-5bf8dfcb4-dlwdf" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 00:50:57.691508 kubelet[2714]: I0813 00:50:57.691462 2714 kubelet.go:2306] "Pod admission denied" podUID="7fcdfc1f-352d-489f-b47d-81484ab20e36" pod="tigera-operator/tigera-operator-5bf8dfcb4-b2798" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:57.788670 kubelet[2714]: I0813 00:50:57.788621 2714 kubelet.go:2306] "Pod admission denied" podUID="3fea4c2c-4582-423c-ba8f-aad48fd4d7bd" pod="tigera-operator/tigera-operator-5bf8dfcb4-vn4f4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:57.996018 kubelet[2714]: I0813 00:50:57.995958 2714 kubelet.go:2306] "Pod admission denied" podUID="b43aab8a-2c12-4d6d-8186-4cd82d3ea666" pod="tigera-operator/tigera-operator-5bf8dfcb4-b5glt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:58.091676 kubelet[2714]: I0813 00:50:58.091627 2714 kubelet.go:2306] "Pod admission denied" podUID="49e093b9-a9c7-4c78-b368-a7fc382aeb2f" pod="tigera-operator/tigera-operator-5bf8dfcb4-shngr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:58.190960 kubelet[2714]: I0813 00:50:58.190911 2714 kubelet.go:2306] "Pod admission denied" podUID="edd95e79-d02b-48ba-9510-1bf17388605b" pod="tigera-operator/tigera-operator-5bf8dfcb4-49tkc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:58.292442 kubelet[2714]: I0813 00:50:58.292313 2714 kubelet.go:2306] "Pod admission denied" podUID="d8bf4af8-69c9-4b3f-a6b9-6380422fe177" pod="tigera-operator/tigera-operator-5bf8dfcb4-5j6q6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:58.342760 kubelet[2714]: I0813 00:50:58.342710 2714 kubelet.go:2306] "Pod admission denied" podUID="4d81cd00-28e3-46b1-ac0e-78525b9b2218" pod="tigera-operator/tigera-operator-5bf8dfcb4-fdlmh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:58.392879 systemd[1]: Started sshd@10-172.234.199.101:22-111.21.235.42:35850.service - OpenSSH per-connection server daemon (111.21.235.42:35850). Aug 13 00:50:58.441109 kubelet[2714]: I0813 00:50:58.441060 2714 kubelet.go:2306] "Pod admission denied" podUID="1a400748-060a-4546-86d4-7f4f50893c9a" pod="tigera-operator/tigera-operator-5bf8dfcb4-vbbsb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:58.645796 kubelet[2714]: I0813 00:50:58.645753 2714 kubelet.go:2306] "Pod admission denied" podUID="79cca8fa-3cfc-4333-bba0-d9cc7e4f2bb6" pod="tigera-operator/tigera-operator-5bf8dfcb4-t7jh6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:58.743422 kubelet[2714]: I0813 00:50:58.743367 2714 kubelet.go:2306] "Pod admission denied" podUID="34cdb589-8fa8-4747-808f-0bdcbb486e49" pod="tigera-operator/tigera-operator-5bf8dfcb4-jwkbf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:58.839923 kubelet[2714]: I0813 00:50:58.839712 2714 kubelet.go:2306] "Pod admission denied" podUID="65093fdd-41b1-4ca7-a54c-296c3eba28ef" pod="tigera-operator/tigera-operator-5bf8dfcb4-pxh6j" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:58.940659 kubelet[2714]: I0813 00:50:58.940562 2714 kubelet.go:2306] "Pod admission denied" podUID="68e43f07-373d-41a6-b196-9d179e01f343" pod="tigera-operator/tigera-operator-5bf8dfcb4-p9n7p" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 00:50:58.955532 kubelet[2714]: E0813 00:50:58.955493 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:50:58.956236 containerd[1575]: time="2025-08-13T00:50:58.956154513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hxx58,Uid:635fd78e-3d10-4a30-9894-3818897e1867,Namespace:kube-system,Attempt:0,}" Aug 13 00:50:59.010902 containerd[1575]: time="2025-08-13T00:50:59.010844419Z" level=error msg="Failed to destroy network for sandbox \"8d3261d9cc5849e72c2b021f217242535a492d1f145cbe119da2af5c97e04362\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:59.013537 containerd[1575]: time="2025-08-13T00:50:59.013446609Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hxx58,Uid:635fd78e-3d10-4a30-9894-3818897e1867,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d3261d9cc5849e72c2b021f217242535a492d1f145cbe119da2af5c97e04362\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:59.013860 systemd[1]: run-netns-cni\x2d538e56db\x2dbd81\x2da3ee\x2da6a9\x2d2a5a003cef42.mount: Deactivated successfully. Aug 13 00:50:59.014074 kubelet[2714]: E0813 00:50:59.013829 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d3261d9cc5849e72c2b021f217242535a492d1f145cbe119da2af5c97e04362\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:50:59.014907 kubelet[2714]: E0813 00:50:59.013907 2714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d3261d9cc5849e72c2b021f217242535a492d1f145cbe119da2af5c97e04362\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-hxx58" Aug 13 00:50:59.014907 kubelet[2714]: E0813 00:50:59.014381 2714 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d3261d9cc5849e72c2b021f217242535a492d1f145cbe119da2af5c97e04362\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-hxx58" Aug 13 00:50:59.014907 kubelet[2714]: E0813 00:50:59.014454 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-hxx58_kube-system(635fd78e-3d10-4a30-9894-3818897e1867)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-hxx58_kube-system(635fd78e-3d10-4a30-9894-3818897e1867)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8d3261d9cc5849e72c2b021f217242535a492d1f145cbe119da2af5c97e04362\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-hxx58" podUID="635fd78e-3d10-4a30-9894-3818897e1867" Aug 13 00:50:59.042183 kubelet[2714]: I0813 00:50:59.042145 2714 kubelet.go:2306] "Pod admission denied" podUID="57ec1802-6a00-431c-9ce2-799a298bba6a" pod="tigera-operator/tigera-operator-5bf8dfcb4-wvlgd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:59.140573 kubelet[2714]: I0813 00:50:59.140535 2714 kubelet.go:2306] "Pod admission denied" podUID="eecd6335-98a4-4bfa-8eee-a7ba07ad6db8" pod="tigera-operator/tigera-operator-5bf8dfcb4-xrgms" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:59.188175 kubelet[2714]: I0813 00:50:59.188149 2714 kubelet.go:2306] "Pod admission denied" podUID="179a5dc6-397e-4b8c-813c-ba9db75d9915" pod="tigera-operator/tigera-operator-5bf8dfcb4-rq4jf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:59.288513 kubelet[2714]: I0813 00:50:59.288408 2714 kubelet.go:2306] "Pod admission denied" podUID="5158e9de-b33e-4af7-84fd-1e46baa7bfe8" pod="tigera-operator/tigera-operator-5bf8dfcb4-dr42z" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:59.390295 kubelet[2714]: I0813 00:50:59.390266 2714 kubelet.go:2306] "Pod admission denied" podUID="ca041009-9908-450a-a4f5-11c45c509d88" pod="tigera-operator/tigera-operator-5bf8dfcb4-fklsr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:59.490960 kubelet[2714]: I0813 00:50:59.490921 2714 kubelet.go:2306] "Pod admission denied" podUID="12483a80-ba93-4697-946e-7fda3c372f30" pod="tigera-operator/tigera-operator-5bf8dfcb4-ckfn5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:59.593774 kubelet[2714]: I0813 00:50:59.593679 2714 kubelet.go:2306] "Pod admission denied" podUID="5892b02d-ed64-4dda-a124-41dedef5a4c1" pod="tigera-operator/tigera-operator-5bf8dfcb4-ltrv2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:59.638819 kubelet[2714]: I0813 00:50:59.638784 2714 kubelet.go:2306] "Pod admission denied" podUID="15a5af76-1666-4195-bdbf-c4eaac307907" pod="tigera-operator/tigera-operator-5bf8dfcb4-m8n4r" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:59.740050 kubelet[2714]: I0813 00:50:59.740021 2714 kubelet.go:2306] "Pod admission denied" podUID="74d53d46-6963-4495-bb84-004e1ee3b22d" pod="tigera-operator/tigera-operator-5bf8dfcb4-gpl6b" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:59.838750 kubelet[2714]: I0813 00:50:59.838718 2714 kubelet.go:2306] "Pod admission denied" podUID="49d41a9d-d9db-4fb2-946e-38057e56e22b" pod="tigera-operator/tigera-operator-5bf8dfcb4-7xzt4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:50:59.940836 kubelet[2714]: I0813 00:50:59.940803 2714 kubelet.go:2306] "Pod admission denied" podUID="cad2a675-a892-425b-95df-206b436e3c0a" pod="tigera-operator/tigera-operator-5bf8dfcb4-dx2mm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:00.149868 kubelet[2714]: I0813 00:51:00.149793 2714 kubelet.go:2306] "Pod admission denied" podUID="a8134dee-9d57-4d48-9835-8ae1fe50c3ce" pod="tigera-operator/tigera-operator-5bf8dfcb4-7cww5" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 00:51:00.240254 kubelet[2714]: I0813 00:51:00.240130 2714 kubelet.go:2306] "Pod admission denied" podUID="449c8804-6856-4955-98b5-082279f6f654" pod="tigera-operator/tigera-operator-5bf8dfcb4-qvlbn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:00.340575 kubelet[2714]: I0813 00:51:00.340536 2714 kubelet.go:2306] "Pod admission denied" podUID="e80fe5d0-a675-4f1c-bf68-659d96d08d8b" pod="tigera-operator/tigera-operator-5bf8dfcb4-6f2h2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:00.441270 kubelet[2714]: I0813 00:51:00.441224 2714 kubelet.go:2306] "Pod admission denied" podUID="4b6c25ff-204f-4ee1-b5ff-4b7f1ca65ff0" pod="tigera-operator/tigera-operator-5bf8dfcb4-hr9jc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:00.540048 kubelet[2714]: I0813 00:51:00.539938 2714 kubelet.go:2306] "Pod admission denied" podUID="efdd92f3-5bd3-4f4c-b1ce-6ff7a092abaf" pod="tigera-operator/tigera-operator-5bf8dfcb4-dr7bw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:00.640540 kubelet[2714]: I0813 00:51:00.640477 2714 kubelet.go:2306] "Pod admission denied" podUID="18b68d12-9693-4777-b53e-d938aa4ba824" pod="tigera-operator/tigera-operator-5bf8dfcb4-5f4gc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:00.740550 kubelet[2714]: I0813 00:51:00.740499 2714 kubelet.go:2306] "Pod admission denied" podUID="d69016da-5ba2-459b-af96-19ad84e3f526" pod="tigera-operator/tigera-operator-5bf8dfcb4-w77kh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:00.838895 kubelet[2714]: I0813 00:51:00.838793 2714 kubelet.go:2306] "Pod admission denied" podUID="faa0b5cb-6886-405a-9890-e2d071975261" pod="tigera-operator/tigera-operator-5bf8dfcb4-sq6kc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:00.888131 kubelet[2714]: I0813 00:51:00.888104 2714 kubelet.go:2306] "Pod admission denied" podUID="80e13389-cc56-4c7b-b0a7-fff9451b09e4" pod="tigera-operator/tigera-operator-5bf8dfcb4-rckn7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:00.988861 kubelet[2714]: I0813 00:51:00.988826 2714 kubelet.go:2306] "Pod admission denied" podUID="41733b73-bb7d-4a7f-95c2-2e14d779a118" pod="tigera-operator/tigera-operator-5bf8dfcb4-kf57v" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:01.090683 kubelet[2714]: I0813 00:51:01.090591 2714 kubelet.go:2306] "Pod admission denied" podUID="dd1bfa07-a700-4727-a27d-5c8835fcbdba" pod="tigera-operator/tigera-operator-5bf8dfcb4-78d79" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:01.189546 kubelet[2714]: I0813 00:51:01.189492 2714 kubelet.go:2306] "Pod admission denied" podUID="60293041-f791-4c70-a2b5-b2d779f01f64" pod="tigera-operator/tigera-operator-5bf8dfcb4-4lrnl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:01.288410 kubelet[2714]: I0813 00:51:01.288369 2714 kubelet.go:2306] "Pod admission denied" podUID="124c7236-6159-44c8-b0dc-0e70dcc80e80" pod="tigera-operator/tigera-operator-5bf8dfcb4-kn6z9" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 00:51:01.330179 sshd[4367]: Invalid user alain from 111.21.235.42 port 35850 Aug 13 00:51:01.389784 kubelet[2714]: I0813 00:51:01.389749 2714 kubelet.go:2306] "Pod admission denied" podUID="5b00c7f5-5a5e-4e2d-80c1-f31f3f1d75f4" pod="tigera-operator/tigera-operator-5bf8dfcb4-kvwpd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:01.589832 kubelet[2714]: I0813 00:51:01.589786 2714 kubelet.go:2306] "Pod admission denied" podUID="d923f270-78b0-4947-9559-284ee2f95b72" pod="tigera-operator/tigera-operator-5bf8dfcb4-d4thb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:01.691053 kubelet[2714]: I0813 00:51:01.690790 2714 kubelet.go:2306] "Pod admission denied" podUID="76f443a1-03b3-4899-baf9-3dbe9a61a518" pod="tigera-operator/tigera-operator-5bf8dfcb4-pl8wh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:01.740096 kubelet[2714]: I0813 00:51:01.740059 2714 kubelet.go:2306] "Pod admission denied" podUID="2ad39e29-0e52-44b0-924a-3e376276a555" pod="tigera-operator/tigera-operator-5bf8dfcb4-pwn4v" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:01.839171 kubelet[2714]: I0813 00:51:01.839131 2714 kubelet.go:2306] "Pod admission denied" podUID="14ecea4c-3140-4b56-9454-d8aa50914da8" pod="tigera-operator/tigera-operator-5bf8dfcb4-g4bdx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:01.939656 kubelet[2714]: I0813 00:51:01.939612 2714 kubelet.go:2306] "Pod admission denied" podUID="3ce92870-72a2-4e32-b534-e06a57cb8084" pod="tigera-operator/tigera-operator-5bf8dfcb4-sh879" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:01.956021 kubelet[2714]: E0813 00:51:01.955541 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:51:02.006653 sshd-session[4396]: pam_faillock(sshd:auth): User unknown Aug 13 00:51:02.012549 sshd[4367]: Postponed keyboard-interactive for invalid user alain from 111.21.235.42 port 35850 ssh2 [preauth] Aug 13 00:51:02.039876 kubelet[2714]: I0813 00:51:02.039836 2714 kubelet.go:2306] "Pod admission denied" podUID="f1f8fe16-9d0a-4641-b787-c79deeaae113" pod="tigera-operator/tigera-operator-5bf8dfcb4-44krj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:02.146170 kubelet[2714]: I0813 00:51:02.145708 2714 kubelet.go:2306] "Pod admission denied" podUID="192c9c68-a262-4b80-ab52-01f7eff29f43" pod="tigera-operator/tigera-operator-5bf8dfcb4-ltzs5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:02.241548 kubelet[2714]: I0813 00:51:02.241421 2714 kubelet.go:2306] "Pod admission denied" podUID="5cded4b4-091a-40b9-8b2b-03ba416bae40" pod="tigera-operator/tigera-operator-5bf8dfcb4-5n47f" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:02.350148 kubelet[2714]: I0813 00:51:02.350089 2714 kubelet.go:2306] "Pod admission denied" podUID="ed29d1d9-945a-43d0-88de-bb40ae17625c" pod="tigera-operator/tigera-operator-5bf8dfcb4-z99zj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:02.442084 kubelet[2714]: I0813 00:51:02.442032 2714 kubelet.go:2306] "Pod admission denied" podUID="694372c6-d68b-4250-8c88-9536eae29be3" pod="tigera-operator/tigera-operator-5bf8dfcb4-nb2z8" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 00:51:02.641086 kubelet[2714]: I0813 00:51:02.641041 2714 kubelet.go:2306] "Pod admission denied" podUID="f4cb745a-eb25-4924-8453-ae26533b414f" pod="tigera-operator/tigera-operator-5bf8dfcb4-fd9mc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:02.741900 kubelet[2714]: I0813 00:51:02.741845 2714 kubelet.go:2306] "Pod admission denied" podUID="7d89d093-ec58-492f-b79e-c7f91d1c71f3" pod="tigera-operator/tigera-operator-5bf8dfcb4-j9cqn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:02.839891 kubelet[2714]: I0813 00:51:02.839860 2714 kubelet.go:2306] "Pod admission denied" podUID="b413523c-2413-42b9-bc10-ab4f8b820674" pod="tigera-operator/tigera-operator-5bf8dfcb4-tbrrm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:02.941615 kubelet[2714]: I0813 00:51:02.941513 2714 kubelet.go:2306] "Pod admission denied" podUID="8454bcc6-0bb0-4e39-a0a2-5ad2c0af2c32" pod="tigera-operator/tigera-operator-5bf8dfcb4-l4tl7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:02.977277 sshd-session[4396]: pam_unix(sshd:auth): check pass; user unknown Aug 13 00:51:02.977313 sshd-session[4396]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=111.21.235.42 Aug 13 00:51:02.977914 sshd-session[4396]: pam_faillock(sshd:auth): User unknown Aug 13 00:51:02.989650 kubelet[2714]: I0813 00:51:02.989620 2714 kubelet.go:2306] "Pod admission denied" podUID="d0e16889-7054-42cc-ac08-18b5083d9faf" pod="tigera-operator/tigera-operator-5bf8dfcb4-nqrfb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:03.098604 kubelet[2714]: I0813 00:51:03.098560 2714 kubelet.go:2306] "Pod admission denied" podUID="c3b945f7-382b-49f9-b1db-38b91028663d" pod="tigera-operator/tigera-operator-5bf8dfcb4-kgkmn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:03.192616 kubelet[2714]: I0813 00:51:03.192496 2714 kubelet.go:2306] "Pod admission denied" podUID="717e1e23-a79d-492b-a9e8-8dae8a7b2901" pod="tigera-operator/tigera-operator-5bf8dfcb4-hqsvl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:03.292588 kubelet[2714]: I0813 00:51:03.292547 2714 kubelet.go:2306] "Pod admission denied" podUID="41230440-f319-477d-8945-7ad5840b7b9a" pod="tigera-operator/tigera-operator-5bf8dfcb4-xg6pc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:03.390924 kubelet[2714]: I0813 00:51:03.390876 2714 kubelet.go:2306] "Pod admission denied" podUID="9b6803bf-8250-463f-a366-b8111b33c01b" pod="tigera-operator/tigera-operator-5bf8dfcb4-vqh25" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:03.481123 kubelet[2714]: I0813 00:51:03.481031 2714 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:51:03.481123 kubelet[2714]: I0813 00:51:03.481065 2714 container_gc.go:88] "Attempting to delete unused containers" Aug 13 00:51:03.484722 kubelet[2714]: I0813 00:51:03.484695 2714 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:51:03.499267 kubelet[2714]: I0813 00:51:03.496709 2714 kubelet.go:2306] "Pod admission denied" podUID="ca5e6180-089e-42f0-83b8-34ec3bd06667" pod="tigera-operator/tigera-operator-5bf8dfcb4-5vvvk" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 00:51:03.512996 kubelet[2714]: I0813 00:51:03.512980 2714 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:51:03.513207 kubelet[2714]: I0813 00:51:03.513174 2714 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-6d647ccb87-wkv5s","kube-system/coredns-7c65d6cfc9-hxx58","kube-system/coredns-7c65d6cfc9-dnlsw","calico-system/calico-node-x7x94","calico-system/csi-node-driver-mmxc6","calico-system/calico-typha-644589c98-5v7wp","kube-system/kube-controller-manager-172-234-199-101","kube-system/kube-proxy-vxgfg","kube-system/kube-apiserver-172-234-199-101","kube-system/kube-scheduler-172-234-199-101"] Aug 13 00:51:03.513324 kubelet[2714]: E0813 00:51:03.513313 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" Aug 13 00:51:03.513396 kubelet[2714]: E0813 00:51:03.513387 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-hxx58" Aug 13 00:51:03.513700 kubelet[2714]: E0813 00:51:03.513670 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-dnlsw" Aug 13 00:51:03.513748 kubelet[2714]: E0813 00:51:03.513740 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-x7x94" Aug 13 00:51:03.513824 kubelet[2714]: E0813 00:51:03.513814 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-mmxc6" Aug 13 00:51:03.513906 kubelet[2714]: E0813 00:51:03.513897 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-644589c98-5v7wp" Aug 13 00:51:03.513981 kubelet[2714]: E0813 00:51:03.513973 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-199-101" Aug 13 00:51:03.514050 kubelet[2714]: E0813 00:51:03.514042 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-vxgfg" Aug 13 00:51:03.514128 kubelet[2714]: E0813 00:51:03.514119 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-199-101" Aug 13 00:51:03.514196 kubelet[2714]: E0813 00:51:03.514167 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-199-101" Aug 13 00:51:03.514247 kubelet[2714]: I0813 00:51:03.514239 2714 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 00:51:03.593910 kubelet[2714]: I0813 00:51:03.593856 2714 kubelet.go:2306] "Pod admission denied" podUID="dc741360-5b34-4f78-b1e7-96766057c1cf" pod="tigera-operator/tigera-operator-5bf8dfcb4-nnmcw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:03.694206 kubelet[2714]: I0813 00:51:03.694166 2714 kubelet.go:2306] "Pod admission denied" podUID="5b20f2bd-dd2b-4305-86ac-542c5c35d377" pod="tigera-operator/tigera-operator-5bf8dfcb4-64bvp" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 00:51:03.894154 kubelet[2714]: I0813 00:51:03.893917 2714 kubelet.go:2306] "Pod admission denied" podUID="76b07b51-14ba-44e0-9705-1d59cd0ae5eb" pod="tigera-operator/tigera-operator-5bf8dfcb4-b5tbf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:03.992540 kubelet[2714]: I0813 00:51:03.991707 2714 kubelet.go:2306] "Pod admission denied" podUID="4f3d5453-60d1-40ca-b430-8bf72cfec016" pod="tigera-operator/tigera-operator-5bf8dfcb4-7tx47" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:04.042167 kubelet[2714]: I0813 00:51:04.042134 2714 kubelet.go:2306] "Pod admission denied" podUID="d8041bbf-1566-45a8-b613-74c788f8b8ee" pod="tigera-operator/tigera-operator-5bf8dfcb4-shs78" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:04.142181 kubelet[2714]: I0813 00:51:04.142142 2714 kubelet.go:2306] "Pod admission denied" podUID="bf5842ee-2c1e-4f74-9697-cec41e82b1fd" pod="tigera-operator/tigera-operator-5bf8dfcb4-8g68z" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:04.342371 kubelet[2714]: I0813 00:51:04.341159 2714 kubelet.go:2306] "Pod admission denied" podUID="96308e33-e5c3-42a1-9060-2549e8310312" pod="tigera-operator/tigera-operator-5bf8dfcb4-qbzlv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:04.442407 kubelet[2714]: I0813 00:51:04.442358 2714 kubelet.go:2306] "Pod admission denied" podUID="96fd8a4e-6b29-46e0-b500-c21637ed27ab" pod="tigera-operator/tigera-operator-5bf8dfcb4-zl8n8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:04.545836 kubelet[2714]: I0813 00:51:04.545779 2714 kubelet.go:2306] "Pod admission denied" podUID="b43d2ca5-9290-4967-9b08-a9718d829a6f" pod="tigera-operator/tigera-operator-5bf8dfcb4-sdnrk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:04.684621 sshd[4367]: PAM: Permission denied for illegal user alain from 111.21.235.42 Aug 13 00:51:04.685113 sshd[4367]: Failed keyboard-interactive/pam for invalid user alain from 111.21.235.42 port 35850 ssh2 Aug 13 00:51:04.742908 kubelet[2714]: I0813 00:51:04.742873 2714 kubelet.go:2306] "Pod admission denied" podUID="7fdcc508-cad7-4e5a-b5ed-d43002de522b" pod="tigera-operator/tigera-operator-5bf8dfcb4-rftmd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:04.841660 kubelet[2714]: I0813 00:51:04.841604 2714 kubelet.go:2306] "Pod admission denied" podUID="552f3600-867d-43c6-8fb3-305a41b2a2be" pod="tigera-operator/tigera-operator-5bf8dfcb4-wwqlm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:04.941838 kubelet[2714]: I0813 00:51:04.941706 2714 kubelet.go:2306] "Pod admission denied" podUID="f08732b9-c6f1-4fac-8f6d-68630aaf4a33" pod="tigera-operator/tigera-operator-5bf8dfcb4-gfjzr" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 00:51:04.955751 containerd[1575]: time="2025-08-13T00:51:04.955694474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d647ccb87-wkv5s,Uid:2addd270-7ed2-4caf-9455-ca6a63f6fe8b,Namespace:calico-system,Attempt:0,}" Aug 13 00:51:05.007216 containerd[1575]: time="2025-08-13T00:51:05.007165077Z" level=error msg="Failed to destroy network for sandbox \"a99a0e5e5df7afb42142e3609b27048fdea7a0e0ccba090f356d4a092ad12fe8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:51:05.009162 systemd[1]: run-netns-cni\x2dd7be51a4\x2dafe7\x2dccae\x2da47d\x2d8aeae77ac66b.mount: Deactivated successfully. Aug 13 00:51:05.011190 containerd[1575]: time="2025-08-13T00:51:05.011139443Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d647ccb87-wkv5s,Uid:2addd270-7ed2-4caf-9455-ca6a63f6fe8b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a99a0e5e5df7afb42142e3609b27048fdea7a0e0ccba090f356d4a092ad12fe8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:51:05.011348 kubelet[2714]: E0813 00:51:05.011319 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a99a0e5e5df7afb42142e3609b27048fdea7a0e0ccba090f356d4a092ad12fe8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:51:05.011408 kubelet[2714]: E0813 00:51:05.011366 2714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a99a0e5e5df7afb42142e3609b27048fdea7a0e0ccba090f356d4a092ad12fe8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" Aug 13 00:51:05.011500 kubelet[2714]: E0813 00:51:05.011445 2714 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a99a0e5e5df7afb42142e3609b27048fdea7a0e0ccba090f356d4a092ad12fe8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" Aug 13 00:51:05.011564 kubelet[2714]: E0813 00:51:05.011496 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6d647ccb87-wkv5s_calico-system(2addd270-7ed2-4caf-9455-ca6a63f6fe8b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6d647ccb87-wkv5s_calico-system(2addd270-7ed2-4caf-9455-ca6a63f6fe8b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a99a0e5e5df7afb42142e3609b27048fdea7a0e0ccba090f356d4a092ad12fe8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" podUID="2addd270-7ed2-4caf-9455-ca6a63f6fe8b" Aug 13 00:51:05.151560 kubelet[2714]: I0813 00:51:05.150678 2714 kubelet.go:2306] "Pod admission denied" podUID="934a42c2-5e4c-4eb0-a923-376173020af2" pod="tigera-operator/tigera-operator-5bf8dfcb4-b29l5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:05.242235 kubelet[2714]: I0813 00:51:05.242100 2714 kubelet.go:2306] "Pod admission denied" podUID="f8ffc173-aa2a-4637-962a-1ae177942f91" pod="tigera-operator/tigera-operator-5bf8dfcb4-hk2md" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:05.343031 kubelet[2714]: I0813 00:51:05.342988 2714 kubelet.go:2306] "Pod admission denied" podUID="76ca3771-1426-4ac7-9d25-98c8497b27ae" pod="tigera-operator/tigera-operator-5bf8dfcb4-dc9bq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:05.357904 sshd[4367]: Connection closed by invalid user alain 111.21.235.42 port 35850 [preauth] Aug 13 00:51:05.360411 systemd[1]: sshd@10-172.234.199.101:22-111.21.235.42:35850.service: Deactivated successfully. Aug 13 00:51:05.442335 kubelet[2714]: I0813 00:51:05.442286 2714 kubelet.go:2306] "Pod admission denied" podUID="fa3be7db-bb41-4dc2-a199-380025da883f" pod="tigera-operator/tigera-operator-5bf8dfcb4-2jvgm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:05.490241 kubelet[2714]: I0813 00:51:05.490192 2714 kubelet.go:2306] "Pod admission denied" podUID="a362b953-475d-43c3-9bae-384739874f3b" pod="tigera-operator/tigera-operator-5bf8dfcb4-rhddz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:05.591928 kubelet[2714]: I0813 00:51:05.591821 2714 kubelet.go:2306] "Pod admission denied" podUID="77b38fa3-bfea-4a4b-bcc7-76274062fe12" pod="tigera-operator/tigera-operator-5bf8dfcb4-l6jkr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:05.699949 kubelet[2714]: I0813 00:51:05.699899 2714 kubelet.go:2306] "Pod admission denied" podUID="c8018c20-37dc-4c88-bb0a-76355c15e11b" pod="tigera-operator/tigera-operator-5bf8dfcb4-5drqf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:05.792784 kubelet[2714]: I0813 00:51:05.792734 2714 kubelet.go:2306] "Pod admission denied" podUID="9bf1834e-4c90-4e51-a01a-e67088611f96" pod="tigera-operator/tigera-operator-5bf8dfcb4-gpww6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:05.891978 kubelet[2714]: I0813 00:51:05.891946 2714 kubelet.go:2306] "Pod admission denied" podUID="94048cdc-4abd-4489-98af-a430f817afab" pod="tigera-operator/tigera-operator-5bf8dfcb4-f46lg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:05.997419 kubelet[2714]: I0813 00:51:05.997354 2714 kubelet.go:2306] "Pod admission denied" podUID="856cdf16-d978-401e-a921-fcbec2f814c4" pod="tigera-operator/tigera-operator-5bf8dfcb4-6szq7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:06.228600 kubelet[2714]: I0813 00:51:06.228496 2714 kubelet.go:2306] "Pod admission denied" podUID="b9c95e65-6ee8-4c34-9873-f805103e3b22" pod="tigera-operator/tigera-operator-5bf8dfcb4-m899f" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 00:51:06.441326 kubelet[2714]: I0813 00:51:06.441281 2714 kubelet.go:2306] "Pod admission denied" podUID="f1c2c1b9-9aee-4361-94f2-f5da1bbd6e36" pod="tigera-operator/tigera-operator-5bf8dfcb4-vblz2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:06.542511 kubelet[2714]: I0813 00:51:06.542388 2714 kubelet.go:2306] "Pod admission denied" podUID="747182d9-bfbf-4ef4-9569-d70eb95ba870" pod="tigera-operator/tigera-operator-5bf8dfcb4-snflk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:06.746542 kubelet[2714]: I0813 00:51:06.745821 2714 kubelet.go:2306] "Pod admission denied" podUID="9d506b54-ec9e-426e-ae98-563d134f0aa7" pod="tigera-operator/tigera-operator-5bf8dfcb4-2q6gx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:06.840824 kubelet[2714]: I0813 00:51:06.840717 2714 kubelet.go:2306] "Pod admission denied" podUID="7c804b63-c946-4eec-8e82-7de9bccb410d" pod="tigera-operator/tigera-operator-5bf8dfcb4-nm274" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:06.941145 kubelet[2714]: I0813 00:51:06.941102 2714 kubelet.go:2306] "Pod admission denied" podUID="d557f0d4-7a87-4194-973d-447275f38f3e" pod="tigera-operator/tigera-operator-5bf8dfcb4-r6lhx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:06.955892 kubelet[2714]: E0813 00:51:06.955699 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:51:06.955892 kubelet[2714]: E0813 00:51:06.955830 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:51:07.143121 kubelet[2714]: I0813 00:51:07.143075 2714 kubelet.go:2306] "Pod admission denied" podUID="6806fee9-983d-4f0d-92ca-623df0dd5cc7" pod="tigera-operator/tigera-operator-5bf8dfcb4-q6xbm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:07.249134 kubelet[2714]: I0813 00:51:07.249069 2714 kubelet.go:2306] "Pod admission denied" podUID="42261ee4-9ae6-4c05-bf35-2daabef36177" pod="tigera-operator/tigera-operator-5bf8dfcb4-cxx2s" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:07.290514 kubelet[2714]: I0813 00:51:07.290480 2714 kubelet.go:2306] "Pod admission denied" podUID="9fc3d13a-c929-4930-9888-11590c4dcd53" pod="tigera-operator/tigera-operator-5bf8dfcb4-9zntx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:07.389645 kubelet[2714]: I0813 00:51:07.389595 2714 kubelet.go:2306] "Pod admission denied" podUID="e5af000e-920b-477a-9df7-46cefb506474" pod="tigera-operator/tigera-operator-5bf8dfcb4-chxmm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:07.593313 kubelet[2714]: I0813 00:51:07.593191 2714 kubelet.go:2306] "Pod admission denied" podUID="f3dacdfd-daf3-481e-a1af-04174d4bf124" pod="tigera-operator/tigera-operator-5bf8dfcb4-97999" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:07.696242 kubelet[2714]: I0813 00:51:07.696201 2714 kubelet.go:2306] "Pod admission denied" podUID="9e94ed2d-97aa-43a4-8db8-b54cbc3c8425" pod="tigera-operator/tigera-operator-5bf8dfcb4-zwflq" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 00:51:07.745187 kubelet[2714]: I0813 00:51:07.745139 2714 kubelet.go:2306] "Pod admission denied" podUID="4c87702c-3294-496c-a0b1-9daf3fcfc3ad" pod="tigera-operator/tigera-operator-5bf8dfcb4-tbf4g" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:07.840182 kubelet[2714]: I0813 00:51:07.840139 2714 kubelet.go:2306] "Pod admission denied" podUID="33ad6787-8361-4638-83d3-fec0c3d7d455" pod="tigera-operator/tigera-operator-5bf8dfcb4-rqgp7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:08.045639 kubelet[2714]: I0813 00:51:08.045590 2714 kubelet.go:2306] "Pod admission denied" podUID="850536b4-42b8-44f2-aa48-e794a86081d6" pod="tigera-operator/tigera-operator-5bf8dfcb4-9b6hl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:08.141946 kubelet[2714]: I0813 00:51:08.141894 2714 kubelet.go:2306] "Pod admission denied" podUID="5ea59b49-d5a8-4d15-a8d5-fa9a94025d79" pod="tigera-operator/tigera-operator-5bf8dfcb4-lss2g" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:08.244124 kubelet[2714]: I0813 00:51:08.244073 2714 kubelet.go:2306] "Pod admission denied" podUID="1bce4bdf-bfc3-467d-8675-d0d6f291e33b" pod="tigera-operator/tigera-operator-5bf8dfcb4-8xnh7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:08.343101 kubelet[2714]: I0813 00:51:08.342979 2714 kubelet.go:2306] "Pod admission denied" podUID="6cf6099b-b90f-4b2f-8ef5-1fa9597d66aa" pod="tigera-operator/tigera-operator-5bf8dfcb4-9xn8b" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:08.389194 kubelet[2714]: I0813 00:51:08.389159 2714 kubelet.go:2306] "Pod admission denied" podUID="06c06d48-d4cc-436d-8f82-e0a0a524a601" pod="tigera-operator/tigera-operator-5bf8dfcb4-bh6wl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:08.493008 kubelet[2714]: I0813 00:51:08.492957 2714 kubelet.go:2306] "Pod admission denied" podUID="8fc2232e-2b9a-4d73-9776-781d9ba6c9e1" pod="tigera-operator/tigera-operator-5bf8dfcb4-rfpml" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:08.593495 kubelet[2714]: I0813 00:51:08.593365 2714 kubelet.go:2306] "Pod admission denied" podUID="e8c80853-2cfa-4559-a3c2-96f393043d21" pod="tigera-operator/tigera-operator-5bf8dfcb4-5cbvc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:08.639739 kubelet[2714]: I0813 00:51:08.639496 2714 kubelet.go:2306] "Pod admission denied" podUID="e52f356a-f433-4afc-aa9e-d110707b885a" pod="tigera-operator/tigera-operator-5bf8dfcb4-m4fnz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:08.743507 kubelet[2714]: I0813 00:51:08.743456 2714 kubelet.go:2306] "Pod admission denied" podUID="ed25cb2e-bdee-4255-813f-a2bff003e3cc" pod="tigera-operator/tigera-operator-5bf8dfcb4-z5gtc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:08.842381 kubelet[2714]: I0813 00:51:08.842322 2714 kubelet.go:2306] "Pod admission denied" podUID="68a220ff-3dd5-4d9c-9ec8-24781f2d8539" pod="tigera-operator/tigera-operator-5bf8dfcb4-prkzt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:08.942999 kubelet[2714]: I0813 00:51:08.942948 2714 kubelet.go:2306] "Pod admission denied" podUID="d5162385-9770-4fc1-8611-70db5975c406" pod="tigera-operator/tigera-operator-5bf8dfcb4-xxpxk" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 00:51:08.955857 kubelet[2714]: E0813 00:51:08.955304 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:51:08.955984 containerd[1575]: time="2025-08-13T00:51:08.955703189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dnlsw,Uid:fd8972e4-10f5-4f13-8b21-de07e7f562ab,Namespace:kube-system,Attempt:0,}" Aug 13 00:51:09.012408 containerd[1575]: time="2025-08-13T00:51:09.012276258Z" level=error msg="Failed to destroy network for sandbox \"d624bf582e101bb6391328e0a2a9df4f5351f67ea75060a9ecfe5f1e329d38f8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:51:09.014269 systemd[1]: run-netns-cni\x2d31882e0e\x2d2b02\x2d89be\x2d95c2\x2dc7d3f8c05e26.mount: Deactivated successfully. Aug 13 00:51:09.016399 containerd[1575]: time="2025-08-13T00:51:09.016323042Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dnlsw,Uid:fd8972e4-10f5-4f13-8b21-de07e7f562ab,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d624bf582e101bb6391328e0a2a9df4f5351f67ea75060a9ecfe5f1e329d38f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:51:09.016809 kubelet[2714]: E0813 00:51:09.016747 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d624bf582e101bb6391328e0a2a9df4f5351f67ea75060a9ecfe5f1e329d38f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:51:09.016875 kubelet[2714]: E0813 00:51:09.016854 2714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d624bf582e101bb6391328e0a2a9df4f5351f67ea75060a9ecfe5f1e329d38f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-dnlsw" Aug 13 00:51:09.016902 kubelet[2714]: E0813 00:51:09.016878 2714 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d624bf582e101bb6391328e0a2a9df4f5351f67ea75060a9ecfe5f1e329d38f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-dnlsw" Aug 13 00:51:09.017205 kubelet[2714]: E0813 00:51:09.016934 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-dnlsw_kube-system(fd8972e4-10f5-4f13-8b21-de07e7f562ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-dnlsw_kube-system(fd8972e4-10f5-4f13-8b21-de07e7f562ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d624bf582e101bb6391328e0a2a9df4f5351f67ea75060a9ecfe5f1e329d38f8\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-dnlsw" podUID="fd8972e4-10f5-4f13-8b21-de07e7f562ab" Aug 13 00:51:09.043464 kubelet[2714]: I0813 00:51:09.043414 2714 kubelet.go:2306] "Pod admission denied" podUID="f42a5b42-acb9-4248-b77e-7670b48de628" pod="tigera-operator/tigera-operator-5bf8dfcb4-smrd5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:09.144657 kubelet[2714]: I0813 00:51:09.144595 2714 kubelet.go:2306] "Pod admission denied" podUID="7eb315aa-3064-41a7-84b4-0bb5ce1ebd92" pod="tigera-operator/tigera-operator-5bf8dfcb4-7h6l2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:09.242337 kubelet[2714]: I0813 00:51:09.242223 2714 kubelet.go:2306] "Pod admission denied" podUID="92a76527-82f7-44e5-9b13-ba7aa71490b6" pod="tigera-operator/tigera-operator-5bf8dfcb4-fstrl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:09.293448 kubelet[2714]: I0813 00:51:09.293414 2714 kubelet.go:2306] "Pod admission denied" podUID="7feb54ac-819e-4987-8941-0adf5b1476e9" pod="tigera-operator/tigera-operator-5bf8dfcb4-mj86z" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:09.392000 kubelet[2714]: I0813 00:51:09.391960 2714 kubelet.go:2306] "Pod admission denied" podUID="134cf8fb-4726-4f18-b68f-ce1dc61fce5f" pod="tigera-operator/tigera-operator-5bf8dfcb4-9t5pk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:09.497095 kubelet[2714]: I0813 00:51:09.496959 2714 kubelet.go:2306] "Pod admission denied" podUID="27ff3902-d76a-464d-b9c7-d32e876692b0" pod="tigera-operator/tigera-operator-5bf8dfcb4-z5plm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:09.594902 kubelet[2714]: I0813 00:51:09.594860 2714 kubelet.go:2306] "Pod admission denied" podUID="0705df57-e3ea-49c6-aa65-8a95b5bdb3d5" pod="tigera-operator/tigera-operator-5bf8dfcb4-2r4b2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:09.694393 kubelet[2714]: I0813 00:51:09.694347 2714 kubelet.go:2306] "Pod admission denied" podUID="4149286f-6fc7-44a3-9d36-31cccfdcdce7" pod="tigera-operator/tigera-operator-5bf8dfcb4-rpm8c" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:09.790717 kubelet[2714]: I0813 00:51:09.790471 2714 kubelet.go:2306] "Pod admission denied" podUID="9733549a-350f-4a0a-8c13-c76b712eb6b0" pod="tigera-operator/tigera-operator-5bf8dfcb4-vnclm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:09.890664 kubelet[2714]: I0813 00:51:09.890622 2714 kubelet.go:2306] "Pod admission denied" podUID="4d915aa7-75d9-4a05-9ae1-2a4d62e53d49" pod="tigera-operator/tigera-operator-5bf8dfcb4-4b97b" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 00:51:09.955671 containerd[1575]: time="2025-08-13T00:51:09.955609308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mmxc6,Uid:7697ce71-aa40-4c78-acaa-c59079720a2c,Namespace:calico-system,Attempt:0,}" Aug 13 00:51:10.001106 containerd[1575]: time="2025-08-13T00:51:10.001059515Z" level=error msg="Failed to destroy network for sandbox \"d84a6a51bfa59498ba39c8e1e4236059ba7398e26634db395069ace5a7f4d942\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:51:10.002432 containerd[1575]: time="2025-08-13T00:51:10.002396003Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mmxc6,Uid:7697ce71-aa40-4c78-acaa-c59079720a2c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d84a6a51bfa59498ba39c8e1e4236059ba7398e26634db395069ace5a7f4d942\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:51:10.002752 kubelet[2714]: E0813 00:51:10.002705 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d84a6a51bfa59498ba39c8e1e4236059ba7398e26634db395069ace5a7f4d942\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:51:10.003091 kubelet[2714]: E0813 00:51:10.002823 2714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d84a6a51bfa59498ba39c8e1e4236059ba7398e26634db395069ace5a7f4d942\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mmxc6" Aug 13 00:51:10.003091 kubelet[2714]: E0813 00:51:10.002846 2714 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d84a6a51bfa59498ba39c8e1e4236059ba7398e26634db395069ace5a7f4d942\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mmxc6" Aug 13 00:51:10.003091 kubelet[2714]: E0813 00:51:10.002892 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mmxc6_calico-system(7697ce71-aa40-4c78-acaa-c59079720a2c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mmxc6_calico-system(7697ce71-aa40-4c78-acaa-c59079720a2c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d84a6a51bfa59498ba39c8e1e4236059ba7398e26634db395069ace5a7f4d942\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mmxc6" podUID="7697ce71-aa40-4c78-acaa-c59079720a2c" Aug 13 00:51:10.004091 systemd[1]: run-netns-cni\x2d568068a0\x2d865e\x2d6263\x2d0af7\x2d69afebfc4620.mount: Deactivated 
successfully. Aug 13 00:51:10.014558 kubelet[2714]: I0813 00:51:10.014507 2714 kubelet.go:2306] "Pod admission denied" podUID="bca58a44-4a0e-45da-9161-b07806bcbb2e" pod="tigera-operator/tigera-operator-5bf8dfcb4-xsfp8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:10.095195 kubelet[2714]: I0813 00:51:10.095091 2714 kubelet.go:2306] "Pod admission denied" podUID="c777251a-ccf4-4d44-872b-69d863cf712b" pod="tigera-operator/tigera-operator-5bf8dfcb4-rbm7h" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:10.191480 kubelet[2714]: I0813 00:51:10.191437 2714 kubelet.go:2306] "Pod admission denied" podUID="397a0611-31a4-480e-a6c3-dd1c48de4804" pod="tigera-operator/tigera-operator-5bf8dfcb4-kwj9g" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:10.398540 kubelet[2714]: I0813 00:51:10.398189 2714 kubelet.go:2306] "Pod admission denied" podUID="54c0a27f-3e1c-42be-9749-ad2f17b90a42" pod="tigera-operator/tigera-operator-5bf8dfcb4-824rv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:10.491128 kubelet[2714]: I0813 00:51:10.491081 2714 kubelet.go:2306] "Pod admission denied" podUID="263e48fa-b968-4c7f-aed4-cd9d561f259d" pod="tigera-operator/tigera-operator-5bf8dfcb4-pq74s" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:10.591448 kubelet[2714]: I0813 00:51:10.591403 2714 kubelet.go:2306] "Pod admission denied" podUID="d3b1abfc-e436-4c5a-8391-15795371ad55" pod="tigera-operator/tigera-operator-5bf8dfcb4-m92c4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:10.691375 kubelet[2714]: I0813 00:51:10.691265 2714 kubelet.go:2306] "Pod admission denied" podUID="ae8c77e6-c65c-40c9-abb4-e61d6594c47f" pod="tigera-operator/tigera-operator-5bf8dfcb4-mglx9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:10.742400 kubelet[2714]: I0813 00:51:10.742187 2714 kubelet.go:2306] "Pod admission denied" podUID="07d7d806-accb-4a3a-bd93-6a5b75da5234" pod="tigera-operator/tigera-operator-5bf8dfcb4-dj6wr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:10.840428 kubelet[2714]: I0813 00:51:10.840382 2714 kubelet.go:2306] "Pod admission denied" podUID="b2c05c94-710d-43e1-8d8a-95644c88c3ff" pod="tigera-operator/tigera-operator-5bf8dfcb4-qcxlt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:10.940953 kubelet[2714]: I0813 00:51:10.940909 2714 kubelet.go:2306] "Pod admission denied" podUID="66f319c6-47ee-4202-bda2-5787bb0e8162" pod="tigera-operator/tigera-operator-5bf8dfcb4-5lsrm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:10.956874 kubelet[2714]: E0813 00:51:10.956636 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\"\"" pod="calico-system/calico-node-x7x94" podUID="ab709cf9-e61c-420b-90c5-1c0355308621" Aug 13 00:51:10.990343 kubelet[2714]: I0813 00:51:10.990305 2714 kubelet.go:2306] "Pod admission denied" podUID="5492916d-a04a-4af7-9737-785493339900" pod="tigera-operator/tigera-operator-5bf8dfcb4-dv6hf" reason="Evicted" message="The node had condition: [DiskPressure]. 
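The lone pod_workers error in this stretch is the root of the calico chain: calico-node-x7x94 cannot start because the node cannot pull ghcr.io/flatcar/calico/node:v3.30.2, so /var/lib/calico/nodename is never written and every sandbox attempt above keeps failing. The kubelet retries the pull on an exponential back-off; the 10-second initial delay and 5-minute cap in the sketch below are assumed defaults and are not read from this node's kubelet.

```python
#!/usr/bin/env python3
"""Illustrative back-off schedule behind the ImagePullBackOff message above."""

INITIAL_DELAY_S = 10   # assumed default initial back-off
MAX_DELAY_S = 300      # assumed default cap (5 minutes)

def backoff_schedule(failures: int) -> list[int]:
    """Delay before each retry after the given number of consecutive failures."""
    delays, delay = [], INITIAL_DELAY_S
    for _ in range(failures):
        delays.append(delay)
        delay = min(delay * 2, MAX_DELAY_S)
    return delays

if __name__ == "__main__":
    # After a handful of failed pulls the node waits 5 minutes between attempts,
    # which is why the calico-node pod can sit in ImagePullBackOff for so long.
    print(backoff_schedule(8))  # [10, 20, 40, 80, 160, 300, 300, 300]
```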
" Aug 13 00:51:11.093975 kubelet[2714]: I0813 00:51:11.093933 2714 kubelet.go:2306] "Pod admission denied" podUID="cfd75ede-139c-4536-9243-e2a5b56b0425" pod="tigera-operator/tigera-operator-5bf8dfcb4-4g9h4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:11.191846 kubelet[2714]: I0813 00:51:11.191803 2714 kubelet.go:2306] "Pod admission denied" podUID="633a0971-f824-48bc-8ad8-b686486e6c67" pod="tigera-operator/tigera-operator-5bf8dfcb4-dd2d7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:11.296201 kubelet[2714]: I0813 00:51:11.295789 2714 kubelet.go:2306] "Pod admission denied" podUID="ada2d659-f64e-4c8c-9f20-bb9546283e80" pod="tigera-operator/tigera-operator-5bf8dfcb4-9t8vt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:11.393542 kubelet[2714]: I0813 00:51:11.393471 2714 kubelet.go:2306] "Pod admission denied" podUID="1e3134e5-bc6f-48a9-861d-360e7b4fce64" pod="tigera-operator/tigera-operator-5bf8dfcb4-6j98r" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:11.495931 kubelet[2714]: I0813 00:51:11.495885 2714 kubelet.go:2306] "Pod admission denied" podUID="32b071f9-3e1e-49b5-88bb-937a0e5e8ed2" pod="tigera-operator/tigera-operator-5bf8dfcb4-zsglw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:11.594826 kubelet[2714]: I0813 00:51:11.594718 2714 kubelet.go:2306] "Pod admission denied" podUID="0576d07a-b6c3-44e1-874b-3b73484e5241" pod="tigera-operator/tigera-operator-5bf8dfcb4-4nbkg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:11.641856 kubelet[2714]: I0813 00:51:11.641809 2714 kubelet.go:2306] "Pod admission denied" podUID="5780aec0-b2fd-49f6-ad3d-693051db2a8d" pod="tigera-operator/tigera-operator-5bf8dfcb4-59dzg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:11.741938 kubelet[2714]: I0813 00:51:11.741898 2714 kubelet.go:2306] "Pod admission denied" podUID="2941a25b-f824-4894-b5f2-33cccad9a2a2" pod="tigera-operator/tigera-operator-5bf8dfcb4-lm5j8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:11.842662 kubelet[2714]: I0813 00:51:11.842622 2714 kubelet.go:2306] "Pod admission denied" podUID="2d59f63e-0a79-430f-855c-c1399269813a" pod="tigera-operator/tigera-operator-5bf8dfcb4-njsgb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:11.941863 kubelet[2714]: I0813 00:51:11.941817 2714 kubelet.go:2306] "Pod admission denied" podUID="99f9aac0-0e93-4f1c-8549-644f04e91786" pod="tigera-operator/tigera-operator-5bf8dfcb4-q5lmm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:12.043283 kubelet[2714]: I0813 00:51:12.043248 2714 kubelet.go:2306] "Pod admission denied" podUID="eedf2faf-c732-4a73-8df5-92b7e1f4abc0" pod="tigera-operator/tigera-operator-5bf8dfcb4-dznv6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:12.142461 kubelet[2714]: I0813 00:51:12.142411 2714 kubelet.go:2306] "Pod admission denied" podUID="310edf7e-9f81-4a67-bcf7-fe97185b31db" pod="tigera-operator/tigera-operator-5bf8dfcb4-ts4xm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:12.242925 kubelet[2714]: I0813 00:51:12.242825 2714 kubelet.go:2306] "Pod admission denied" podUID="fcc784b8-d820-47ab-8f02-3565839fd368" pod="tigera-operator/tigera-operator-5bf8dfcb4-g4q6h" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 00:51:12.291542 kubelet[2714]: I0813 00:51:12.291256 2714 kubelet.go:2306] "Pod admission denied" podUID="697fea4f-69b1-4109-8a48-217075dffc34" pod="tigera-operator/tigera-operator-5bf8dfcb4-n9kql" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:12.392133 kubelet[2714]: I0813 00:51:12.392087 2714 kubelet.go:2306] "Pod admission denied" podUID="86ba5607-c71a-4f25-aa16-b0c02498cce7" pod="tigera-operator/tigera-operator-5bf8dfcb4-g9gtq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:12.494757 kubelet[2714]: I0813 00:51:12.494431 2714 kubelet.go:2306] "Pod admission denied" podUID="48158ec2-94e6-4172-9cdc-6070267402bb" pod="tigera-operator/tigera-operator-5bf8dfcb4-s5tt5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:12.595877 kubelet[2714]: I0813 00:51:12.595827 2714 kubelet.go:2306] "Pod admission denied" podUID="f15f4777-a365-488b-9022-8c058dc778ae" pod="tigera-operator/tigera-operator-5bf8dfcb4-tkwdj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:12.700434 kubelet[2714]: I0813 00:51:12.700152 2714 kubelet.go:2306] "Pod admission denied" podUID="55961331-78dc-4e9b-baeb-fc7033704784" pod="tigera-operator/tigera-operator-5bf8dfcb4-2skcs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:12.794554 kubelet[2714]: I0813 00:51:12.793814 2714 kubelet.go:2306] "Pod admission denied" podUID="07a5b6c6-ca8f-4b40-9a4e-2c912e18f057" pod="tigera-operator/tigera-operator-5bf8dfcb4-gqg9h" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:12.993713 kubelet[2714]: I0813 00:51:12.993676 2714 kubelet.go:2306] "Pod admission denied" podUID="7114e6e1-768c-411e-ae43-6d909c7c5f2b" pod="tigera-operator/tigera-operator-5bf8dfcb4-c7tzr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:13.095993 kubelet[2714]: I0813 00:51:13.095862 2714 kubelet.go:2306] "Pod admission denied" podUID="06cfae0f-581c-416d-8cbd-573c72b019e3" pod="tigera-operator/tigera-operator-5bf8dfcb4-d4pxt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:13.207018 kubelet[2714]: I0813 00:51:13.206580 2714 kubelet.go:2306] "Pod admission denied" podUID="f3b65dc5-13e0-48f0-90d3-bcc1ad88f4eb" pod="tigera-operator/tigera-operator-5bf8dfcb4-fp9b5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:13.395098 kubelet[2714]: I0813 00:51:13.395052 2714 kubelet.go:2306] "Pod admission denied" podUID="58e38d5c-7895-4495-9b69-0652abdd945e" pod="tigera-operator/tigera-operator-5bf8dfcb4-48c8t" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:13.496015 kubelet[2714]: I0813 00:51:13.495962 2714 kubelet.go:2306] "Pod admission denied" podUID="07f30ecb-0736-4f96-9e9b-a9c8d5996979" pod="tigera-operator/tigera-operator-5bf8dfcb4-sj4xv" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 00:51:13.527169 kubelet[2714]: I0813 00:51:13.527130 2714 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:51:13.527169 kubelet[2714]: I0813 00:51:13.527159 2714 container_gc.go:88] "Attempting to delete unused containers" Aug 13 00:51:13.531111 kubelet[2714]: I0813 00:51:13.530760 2714 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:51:13.543809 kubelet[2714]: I0813 00:51:13.543790 2714 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:51:13.543894 kubelet[2714]: I0813 00:51:13.543856 2714 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-6d647ccb87-wkv5s","kube-system/coredns-7c65d6cfc9-hxx58","kube-system/coredns-7c65d6cfc9-dnlsw","calico-system/calico-node-x7x94","calico-system/csi-node-driver-mmxc6","calico-system/calico-typha-644589c98-5v7wp","kube-system/kube-controller-manager-172-234-199-101","kube-system/kube-proxy-vxgfg","kube-system/kube-apiserver-172-234-199-101","kube-system/kube-scheduler-172-234-199-101"] Aug 13 00:51:13.543894 kubelet[2714]: E0813 00:51:13.543879 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" Aug 13 00:51:13.543894 kubelet[2714]: E0813 00:51:13.543888 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-hxx58" Aug 13 00:51:13.543894 kubelet[2714]: E0813 00:51:13.543895 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-dnlsw" Aug 13 00:51:13.544013 kubelet[2714]: E0813 00:51:13.543902 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-x7x94" Aug 13 00:51:13.544013 kubelet[2714]: E0813 00:51:13.543908 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-mmxc6" Aug 13 00:51:13.544013 kubelet[2714]: E0813 00:51:13.543918 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-644589c98-5v7wp" Aug 13 00:51:13.544013 kubelet[2714]: E0813 00:51:13.543926 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-199-101" Aug 13 00:51:13.544013 kubelet[2714]: E0813 00:51:13.543934 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-vxgfg" Aug 13 00:51:13.544013 kubelet[2714]: E0813 00:51:13.543942 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-199-101" Aug 13 00:51:13.544013 kubelet[2714]: E0813 00:51:13.543949 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-199-101" Aug 13 00:51:13.544013 kubelet[2714]: I0813 00:51:13.543958 2714 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 00:51:13.592950 kubelet[2714]: I0813 00:51:13.592913 2714 kubelet.go:2306] "Pod admission denied" podUID="5e5dcd78-3883-45d1-97f1-3a4f4418fb1c" pod="tigera-operator/tigera-operator-5bf8dfcb4-tftpj" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 00:51:13.693228 kubelet[2714]: I0813 00:51:13.693116 2714 kubelet.go:2306] "Pod admission denied" podUID="2eb03320-866f-404f-ab7e-e3941d0b4813" pod="tigera-operator/tigera-operator-5bf8dfcb4-7ht77" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:13.796231 kubelet[2714]: I0813 00:51:13.796161 2714 kubelet.go:2306] "Pod admission denied" podUID="26869d6f-c89e-4c14-92ca-5413ed021659" pod="tigera-operator/tigera-operator-5bf8dfcb4-sml8n" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:13.892508 kubelet[2714]: I0813 00:51:13.892462 2714 kubelet.go:2306] "Pod admission denied" podUID="3144e819-fb10-4fd2-bdf5-bd80b533feda" pod="tigera-operator/tigera-operator-5bf8dfcb4-w9986" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:13.945219 kubelet[2714]: I0813 00:51:13.943994 2714 kubelet.go:2306] "Pod admission denied" podUID="307abf7f-6ae6-43a3-b5f2-e613ccb692f2" pod="tigera-operator/tigera-operator-5bf8dfcb4-nsspv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:13.956816 kubelet[2714]: E0813 00:51:13.956757 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:51:13.957862 containerd[1575]: time="2025-08-13T00:51:13.957831283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hxx58,Uid:635fd78e-3d10-4a30-9894-3818897e1867,Namespace:kube-system,Attempt:0,}" Aug 13 00:51:14.038507 containerd[1575]: time="2025-08-13T00:51:14.038463496Z" level=error msg="Failed to destroy network for sandbox \"5b32a16d62f1643e0e48fb799f25f4c6011cacba6bbeea24d09e5dbbfe4ae727\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:51:14.040416 systemd[1]: run-netns-cni\x2de482ff4c\x2db5a7\x2d15ba\x2dfa5e\x2d0194568a4906.mount: Deactivated successfully. 
Aug 13 00:51:14.043157 containerd[1575]: time="2025-08-13T00:51:14.043061240Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hxx58,Uid:635fd78e-3d10-4a30-9894-3818897e1867,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b32a16d62f1643e0e48fb799f25f4c6011cacba6bbeea24d09e5dbbfe4ae727\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:51:14.043968 kubelet[2714]: E0813 00:51:14.043617 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b32a16d62f1643e0e48fb799f25f4c6011cacba6bbeea24d09e5dbbfe4ae727\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:51:14.045547 kubelet[2714]: E0813 00:51:14.044994 2714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b32a16d62f1643e0e48fb799f25f4c6011cacba6bbeea24d09e5dbbfe4ae727\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-hxx58" Aug 13 00:51:14.045547 kubelet[2714]: E0813 00:51:14.045021 2714 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b32a16d62f1643e0e48fb799f25f4c6011cacba6bbeea24d09e5dbbfe4ae727\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-hxx58" Aug 13 00:51:14.045547 kubelet[2714]: E0813 00:51:14.045099 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-hxx58_kube-system(635fd78e-3d10-4a30-9894-3818897e1867)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-hxx58_kube-system(635fd78e-3d10-4a30-9894-3818897e1867)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5b32a16d62f1643e0e48fb799f25f4c6011cacba6bbeea24d09e5dbbfe4ae727\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-hxx58" podUID="635fd78e-3d10-4a30-9894-3818897e1867" Aug 13 00:51:14.056206 kubelet[2714]: I0813 00:51:14.055877 2714 kubelet.go:2306] "Pod admission denied" podUID="e3fdf005-0b59-4fc0-a59e-9b9b11ea09ca" pod="tigera-operator/tigera-operator-5bf8dfcb4-fc2fq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:14.242048 kubelet[2714]: I0813 00:51:14.241954 2714 kubelet.go:2306] "Pod admission denied" podUID="8b73b9a5-10df-403f-86cc-cb81efd2eb6f" pod="tigera-operator/tigera-operator-5bf8dfcb4-wkvcd" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 00:51:14.350921 kubelet[2714]: I0813 00:51:14.350885 2714 kubelet.go:2306] "Pod admission denied" podUID="29fa88a2-044b-4c84-a734-edebc7c4909f" pod="tigera-operator/tigera-operator-5bf8dfcb4-hs5pc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:14.441179 kubelet[2714]: I0813 00:51:14.441140 2714 kubelet.go:2306] "Pod admission denied" podUID="2a8913c3-4d37-4232-bccf-53074b129279" pod="tigera-operator/tigera-operator-5bf8dfcb4-tjmgn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:14.642763 kubelet[2714]: I0813 00:51:14.642708 2714 kubelet.go:2306] "Pod admission denied" podUID="f1a44d0b-f9fb-45b9-aa54-5c2243cc344e" pod="tigera-operator/tigera-operator-5bf8dfcb4-lvcmm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:14.825780 kubelet[2714]: I0813 00:51:14.825727 2714 kubelet.go:2306] "Pod admission denied" podUID="313c495d-95ae-412f-afaf-d36d3386c00b" pod="tigera-operator/tigera-operator-5bf8dfcb4-k87wm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:14.897583 kubelet[2714]: I0813 00:51:14.897422 2714 kubelet.go:2306] "Pod admission denied" podUID="29fda3ed-c084-49a8-ab23-932f58af7ed6" pod="tigera-operator/tigera-operator-5bf8dfcb4-k9zbh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:14.993929 kubelet[2714]: I0813 00:51:14.993886 2714 kubelet.go:2306] "Pod admission denied" podUID="939ffa8f-3eb8-4f0b-8d53-703ee029dfcf" pod="tigera-operator/tigera-operator-5bf8dfcb4-tndqb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:15.094672 kubelet[2714]: I0813 00:51:15.094633 2714 kubelet.go:2306] "Pod admission denied" podUID="483be1c7-863f-4c1e-a518-c4b89a1526d2" pod="tigera-operator/tigera-operator-5bf8dfcb4-n9hj6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:15.147540 kubelet[2714]: I0813 00:51:15.146870 2714 kubelet.go:2306] "Pod admission denied" podUID="4791ab9d-35ce-4411-8050-f7b128f24380" pod="tigera-operator/tigera-operator-5bf8dfcb4-spjkf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:15.243349 kubelet[2714]: I0813 00:51:15.243093 2714 kubelet.go:2306] "Pod admission denied" podUID="cf308bae-69aa-49d5-8c86-2ac9b4d9d7e7" pod="tigera-operator/tigera-operator-5bf8dfcb4-kjxs9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:15.342950 kubelet[2714]: I0813 00:51:15.342899 2714 kubelet.go:2306] "Pod admission denied" podUID="751a5545-2d3c-4838-8c9e-22af921908b3" pod="tigera-operator/tigera-operator-5bf8dfcb4-skjjq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:15.444537 kubelet[2714]: I0813 00:51:15.444471 2714 kubelet.go:2306] "Pod admission denied" podUID="80745bf4-d205-4a26-a2a4-85de4696140d" pod="tigera-operator/tigera-operator-5bf8dfcb4-bkqcq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:15.542766 kubelet[2714]: I0813 00:51:15.542441 2714 kubelet.go:2306] "Pod admission denied" podUID="27689067-74a0-4fc0-8c90-1c0cb3ec777a" pod="tigera-operator/tigera-operator-5bf8dfcb4-xq57m" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:15.644801 kubelet[2714]: I0813 00:51:15.644738 2714 kubelet.go:2306] "Pod admission denied" podUID="4a8d70c9-8f87-44c2-84d3-114458f8dcbf" pod="tigera-operator/tigera-operator-5bf8dfcb4-ks29s" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 00:51:15.744315 kubelet[2714]: I0813 00:51:15.744257 2714 kubelet.go:2306] "Pod admission denied" podUID="7f2dec5a-669b-48a6-b89a-a585f8893c57" pod="tigera-operator/tigera-operator-5bf8dfcb4-fcr7c" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:15.796093 kubelet[2714]: I0813 00:51:15.795073 2714 kubelet.go:2306] "Pod admission denied" podUID="42e2c123-7cbf-42ac-8ddf-89359c5c0a03" pod="tigera-operator/tigera-operator-5bf8dfcb4-rws52" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:15.892875 kubelet[2714]: I0813 00:51:15.892834 2714 kubelet.go:2306] "Pod admission denied" podUID="63d61e5d-f752-494a-b0da-e443bb91d49c" pod="tigera-operator/tigera-operator-5bf8dfcb4-cm4b6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:15.955749 kubelet[2714]: E0813 00:51:15.955326 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:51:16.097820 kubelet[2714]: I0813 00:51:16.097692 2714 kubelet.go:2306] "Pod admission denied" podUID="b7526c28-2e7f-42af-af4c-2f0a8d44ce4c" pod="tigera-operator/tigera-operator-5bf8dfcb4-twpcb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:16.192426 kubelet[2714]: I0813 00:51:16.192379 2714 kubelet.go:2306] "Pod admission denied" podUID="16243408-c827-4f63-a38e-6cacfac451bf" pod="tigera-operator/tigera-operator-5bf8dfcb4-cx6bn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:16.294502 kubelet[2714]: I0813 00:51:16.294454 2714 kubelet.go:2306] "Pod admission denied" podUID="d9cc9846-52f4-4730-8a34-d395df8c7603" pod="tigera-operator/tigera-operator-5bf8dfcb4-wmdhx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:16.393621 kubelet[2714]: I0813 00:51:16.393576 2714 kubelet.go:2306] "Pod admission denied" podUID="3a49397c-bdc5-42c5-8c07-049bd4a78a76" pod="tigera-operator/tigera-operator-5bf8dfcb4-dwskm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:16.443608 kubelet[2714]: I0813 00:51:16.443562 2714 kubelet.go:2306] "Pod admission denied" podUID="70327893-2c43-4492-9e8b-4ed2cd8d86ea" pod="tigera-operator/tigera-operator-5bf8dfcb4-wrjxs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:16.542962 kubelet[2714]: I0813 00:51:16.542919 2714 kubelet.go:2306] "Pod admission denied" podUID="31a3cf23-6725-44aa-835d-8a67ee6abba4" pod="tigera-operator/tigera-operator-5bf8dfcb4-d28jg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:16.644705 kubelet[2714]: I0813 00:51:16.644576 2714 kubelet.go:2306] "Pod admission denied" podUID="617684e1-d594-4258-8018-aa7e5e2da119" pod="tigera-operator/tigera-operator-5bf8dfcb4-nr2bj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:16.743425 kubelet[2714]: I0813 00:51:16.743372 2714 kubelet.go:2306] "Pod admission denied" podUID="d1570d88-51ac-4572-bad3-65284fdf76fb" pod="tigera-operator/tigera-operator-5bf8dfcb4-zc5fc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:16.953137 kubelet[2714]: I0813 00:51:16.952612 2714 kubelet.go:2306] "Pod admission denied" podUID="cbfac632-b2b2-44ed-bce3-cf07db647d69" pod="tigera-operator/tigera-operator-5bf8dfcb4-xz8bn" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 00:51:17.044538 kubelet[2714]: I0813 00:51:17.044479 2714 kubelet.go:2306] "Pod admission denied" podUID="5176db79-6d1c-466d-a6ed-b99ddf732782" pod="tigera-operator/tigera-operator-5bf8dfcb4-45cxl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:17.143931 kubelet[2714]: I0813 00:51:17.143889 2714 kubelet.go:2306] "Pod admission denied" podUID="1995d990-f774-45c4-a1f1-8facd15c6b4e" pod="tigera-operator/tigera-operator-5bf8dfcb4-rhkrf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:17.345186 kubelet[2714]: I0813 00:51:17.345052 2714 kubelet.go:2306] "Pod admission denied" podUID="be209011-cd1d-4bea-b68b-715cbdb275ee" pod="tigera-operator/tigera-operator-5bf8dfcb4-wv42g" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:17.444893 kubelet[2714]: I0813 00:51:17.444840 2714 kubelet.go:2306] "Pod admission denied" podUID="1008eeb0-19c8-48d3-9e6a-ae3c30d6bd2c" pod="tigera-operator/tigera-operator-5bf8dfcb4-ln7l8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:17.496048 kubelet[2714]: I0813 00:51:17.495991 2714 kubelet.go:2306] "Pod admission denied" podUID="cb8111f0-eb9a-43b8-b2f2-18afd02dc2f1" pod="tigera-operator/tigera-operator-5bf8dfcb4-b9dkp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:17.591828 kubelet[2714]: I0813 00:51:17.591779 2714 kubelet.go:2306] "Pod admission denied" podUID="14c6940c-2e56-4f12-990c-50e51baae663" pod="tigera-operator/tigera-operator-5bf8dfcb4-2b8r9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:17.808417 kubelet[2714]: I0813 00:51:17.807381 2714 kubelet.go:2306] "Pod admission denied" podUID="1c9591c2-c214-477e-b02a-c1bb998622e0" pod="tigera-operator/tigera-operator-5bf8dfcb4-6zn6t" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:17.894773 kubelet[2714]: I0813 00:51:17.894721 2714 kubelet.go:2306] "Pod admission denied" podUID="b8342d60-e703-47fa-8933-ef88846d7982" pod="tigera-operator/tigera-operator-5bf8dfcb4-rx8nj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:17.942470 kubelet[2714]: I0813 00:51:17.942418 2714 kubelet.go:2306] "Pod admission denied" podUID="7b5570da-0df3-48fb-a49a-5af2a0698467" pod="tigera-operator/tigera-operator-5bf8dfcb4-9whw2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:18.049699 kubelet[2714]: I0813 00:51:18.049636 2714 kubelet.go:2306] "Pod admission denied" podUID="8a1da908-c745-40b2-bf8b-ff7c5ee48751" pod="tigera-operator/tigera-operator-5bf8dfcb4-47vlh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:18.247432 kubelet[2714]: I0813 00:51:18.247371 2714 kubelet.go:2306] "Pod admission denied" podUID="ebf9f3eb-31a6-4492-a120-c4b9bcb04d05" pod="tigera-operator/tigera-operator-5bf8dfcb4-pzcr9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:18.345236 kubelet[2714]: I0813 00:51:18.345179 2714 kubelet.go:2306] "Pod admission denied" podUID="95090741-ce6c-4c60-bb60-45bfaa63e27b" pod="tigera-operator/tigera-operator-5bf8dfcb4-v9zkf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:18.442941 kubelet[2714]: I0813 00:51:18.442889 2714 kubelet.go:2306] "Pod admission denied" podUID="1cff6766-c88c-469f-9de4-53e2105b0071" pod="tigera-operator/tigera-operator-5bf8dfcb4-5wwvw" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 00:51:18.543328 kubelet[2714]: I0813 00:51:18.543208 2714 kubelet.go:2306] "Pod admission denied" podUID="d6883fbf-53c5-4f21-9e95-0b9e8e66b6a7" pod="tigera-operator/tigera-operator-5bf8dfcb4-g7vt4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:18.647312 kubelet[2714]: I0813 00:51:18.647257 2714 kubelet.go:2306] "Pod admission denied" podUID="65c6740e-ae2e-4f71-b494-05b105848d3f" pod="tigera-operator/tigera-operator-5bf8dfcb4-qvc99" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:18.846353 kubelet[2714]: I0813 00:51:18.846137 2714 kubelet.go:2306] "Pod admission denied" podUID="ae3434b7-3d85-475b-a146-3024bd13cf43" pod="tigera-operator/tigera-operator-5bf8dfcb4-jhfcr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:18.954256 kubelet[2714]: I0813 00:51:18.953842 2714 kubelet.go:2306] "Pod admission denied" podUID="f9430ff1-2249-4ab7-ad63-d71fbffd1553" pod="tigera-operator/tigera-operator-5bf8dfcb4-2srmh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:19.043578 kubelet[2714]: I0813 00:51:19.043531 2714 kubelet.go:2306] "Pod admission denied" podUID="11e6c09c-b594-42c7-9559-f060e0eb0987" pod="tigera-operator/tigera-operator-5bf8dfcb4-s8ssg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:19.143977 kubelet[2714]: I0813 00:51:19.143929 2714 kubelet.go:2306] "Pod admission denied" podUID="d8e9cf24-5768-453e-8365-566ef936858d" pod="tigera-operator/tigera-operator-5bf8dfcb4-xh9mf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:19.244615 kubelet[2714]: I0813 00:51:19.244560 2714 kubelet.go:2306] "Pod admission denied" podUID="b58ac7dd-7440-48c8-b2f4-3c82389f66e3" pod="tigera-operator/tigera-operator-5bf8dfcb4-mzg8w" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:19.345248 kubelet[2714]: I0813 00:51:19.345191 2714 kubelet.go:2306] "Pod admission denied" podUID="594e021b-1014-42ae-b05e-0932423dda18" pod="tigera-operator/tigera-operator-5bf8dfcb4-w6mm9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:19.393568 kubelet[2714]: I0813 00:51:19.393508 2714 kubelet.go:2306] "Pod admission denied" podUID="0f685546-980b-4540-9bcc-443bebee1a68" pod="tigera-operator/tigera-operator-5bf8dfcb4-nbnnc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:19.496814 kubelet[2714]: I0813 00:51:19.496678 2714 kubelet.go:2306] "Pod admission denied" podUID="22fecc99-a66e-4e33-b27c-be078ba16503" pod="tigera-operator/tigera-operator-5bf8dfcb4-npgmw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:19.596010 kubelet[2714]: I0813 00:51:19.595947 2714 kubelet.go:2306] "Pod admission denied" podUID="4e4761ad-40d2-4942-8ffb-a2c0d15a7d47" pod="tigera-operator/tigera-operator-5bf8dfcb4-6lszg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:19.645721 kubelet[2714]: I0813 00:51:19.645667 2714 kubelet.go:2306] "Pod admission denied" podUID="cdf27810-0ebf-4f7b-a930-7e35050408d9" pod="tigera-operator/tigera-operator-5bf8dfcb4-44rcf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:19.743970 kubelet[2714]: I0813 00:51:19.743917 2714 kubelet.go:2306] "Pod admission denied" podUID="e4d9eeb5-25a8-4d82-ada0-437a6cd7728b" pod="tigera-operator/tigera-operator-5bf8dfcb4-kgnb2" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 00:51:19.846013 kubelet[2714]: I0813 00:51:19.845388 2714 kubelet.go:2306] "Pod admission denied" podUID="1aa0e052-c554-4e33-a73d-ddaa406ec86b" pod="tigera-operator/tigera-operator-5bf8dfcb4-wfspn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:19.945671 kubelet[2714]: I0813 00:51:19.945616 2714 kubelet.go:2306] "Pod admission denied" podUID="7be3a749-2de3-4f16-b4d2-864c385ceb95" pod="tigera-operator/tigera-operator-5bf8dfcb4-9xbc5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:20.044421 kubelet[2714]: I0813 00:51:20.044369 2714 kubelet.go:2306] "Pod admission denied" podUID="3f6ada2d-f559-4029-b592-838f5111d85d" pod="tigera-operator/tigera-operator-5bf8dfcb4-pxwlk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:20.144099 kubelet[2714]: I0813 00:51:20.144047 2714 kubelet.go:2306] "Pod admission denied" podUID="b7e92c76-3e14-46cb-9e86-a5af637682da" pod="tigera-operator/tigera-operator-5bf8dfcb4-5kpjn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:20.243757 kubelet[2714]: I0813 00:51:20.243708 2714 kubelet.go:2306] "Pod admission denied" podUID="8833a8d0-22f7-45c2-9f77-d14ee9837391" pod="tigera-operator/tigera-operator-5bf8dfcb4-xz7cv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:20.295403 kubelet[2714]: I0813 00:51:20.295371 2714 kubelet.go:2306] "Pod admission denied" podUID="4c76fb48-57d3-4587-af31-75c83b8de1e3" pod="tigera-operator/tigera-operator-5bf8dfcb4-wj4nt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:20.393632 kubelet[2714]: I0813 00:51:20.393582 2714 kubelet.go:2306] "Pod admission denied" podUID="eb70a82e-c8b5-4ebb-98f0-487165a427d4" pod="tigera-operator/tigera-operator-5bf8dfcb4-6bqtn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:20.493735 kubelet[2714]: I0813 00:51:20.493186 2714 kubelet.go:2306] "Pod admission denied" podUID="19c9f291-fd20-4b2c-a5fe-200ff6690eea" pod="tigera-operator/tigera-operator-5bf8dfcb4-4q97v" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:20.595367 kubelet[2714]: I0813 00:51:20.595318 2714 kubelet.go:2306] "Pod admission denied" podUID="06e6e682-0f45-4cce-816d-1c4614696526" pod="tigera-operator/tigera-operator-5bf8dfcb4-2p6rs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:20.692844 kubelet[2714]: I0813 00:51:20.692807 2714 kubelet.go:2306] "Pod admission denied" podUID="2293f205-1045-4eac-9df4-b87c8d1dbaa9" pod="tigera-operator/tigera-operator-5bf8dfcb4-47t7v" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:20.793881 kubelet[2714]: I0813 00:51:20.793773 2714 kubelet.go:2306] "Pod admission denied" podUID="e97bb5ce-c9ce-4a04-b118-8e5bf86a563f" pod="tigera-operator/tigera-operator-5bf8dfcb4-q2rtc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:20.892538 kubelet[2714]: I0813 00:51:20.892481 2714 kubelet.go:2306] "Pod admission denied" podUID="2e360aa0-1e48-4272-86f7-067b02b5992e" pod="tigera-operator/tigera-operator-5bf8dfcb4-5dnd8" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 00:51:20.956036 containerd[1575]: time="2025-08-13T00:51:20.955849617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d647ccb87-wkv5s,Uid:2addd270-7ed2-4caf-9455-ca6a63f6fe8b,Namespace:calico-system,Attempt:0,}" Aug 13 00:51:20.956036 containerd[1575]: time="2025-08-13T00:51:20.955893027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mmxc6,Uid:7697ce71-aa40-4c78-acaa-c59079720a2c,Namespace:calico-system,Attempt:0,}" Aug 13 00:51:21.017669 containerd[1575]: time="2025-08-13T00:51:21.017625733Z" level=error msg="Failed to destroy network for sandbox \"7cdf6bd7411ea3e4f8329d28a7c896086215af426631243dc88957dc7f7e26ed\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:51:21.019845 containerd[1575]: time="2025-08-13T00:51:21.019778643Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mmxc6,Uid:7697ce71-aa40-4c78-acaa-c59079720a2c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cdf6bd7411ea3e4f8329d28a7c896086215af426631243dc88957dc7f7e26ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:51:21.021082 systemd[1]: run-netns-cni\x2d0f147795\x2df63f\x2d31d4\x2d2756\x2d7483c75c5412.mount: Deactivated successfully. Aug 13 00:51:21.027918 kubelet[2714]: I0813 00:51:21.026882 2714 kubelet.go:2306] "Pod admission denied" podUID="9868d3af-f555-4ee1-adcc-e9d2f94d50d3" pod="tigera-operator/tigera-operator-5bf8dfcb4-gcx9t" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 00:51:21.028327 kubelet[2714]: E0813 00:51:21.028305 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cdf6bd7411ea3e4f8329d28a7c896086215af426631243dc88957dc7f7e26ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:51:21.028571 kubelet[2714]: E0813 00:51:21.028474 2714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cdf6bd7411ea3e4f8329d28a7c896086215af426631243dc88957dc7f7e26ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mmxc6" Aug 13 00:51:21.029199 kubelet[2714]: E0813 00:51:21.029179 2714 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cdf6bd7411ea3e4f8329d28a7c896086215af426631243dc88957dc7f7e26ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mmxc6" Aug 13 00:51:21.029303 kubelet[2714]: E0813 00:51:21.029278 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mmxc6_calico-system(7697ce71-aa40-4c78-acaa-c59079720a2c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mmxc6_calico-system(7697ce71-aa40-4c78-acaa-c59079720a2c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7cdf6bd7411ea3e4f8329d28a7c896086215af426631243dc88957dc7f7e26ed\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mmxc6" podUID="7697ce71-aa40-4c78-acaa-c59079720a2c" Aug 13 00:51:21.043682 containerd[1575]: time="2025-08-13T00:51:21.041614899Z" level=error msg="Failed to destroy network for sandbox \"4cdaa491d3e5e99b011a26bc7b396677fc6c50fe55460583b93d67a33b26bf2b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:51:21.043443 systemd[1]: run-netns-cni\x2d4ba44918\x2dfae3\x2df4f4\x2df992\x2d6c5e152e57ad.mount: Deactivated successfully. 
Aug 13 00:51:21.046718 containerd[1575]: time="2025-08-13T00:51:21.046579570Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d647ccb87-wkv5s,Uid:2addd270-7ed2-4caf-9455-ca6a63f6fe8b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cdaa491d3e5e99b011a26bc7b396677fc6c50fe55460583b93d67a33b26bf2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:51:21.046907 kubelet[2714]: E0813 00:51:21.046880 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cdaa491d3e5e99b011a26bc7b396677fc6c50fe55460583b93d67a33b26bf2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:51:21.048768 kubelet[2714]: E0813 00:51:21.048558 2714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cdaa491d3e5e99b011a26bc7b396677fc6c50fe55460583b93d67a33b26bf2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" Aug 13 00:51:21.048768 kubelet[2714]: E0813 00:51:21.048582 2714 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cdaa491d3e5e99b011a26bc7b396677fc6c50fe55460583b93d67a33b26bf2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" Aug 13 00:51:21.048768 kubelet[2714]: E0813 00:51:21.048616 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6d647ccb87-wkv5s_calico-system(2addd270-7ed2-4caf-9455-ca6a63f6fe8b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6d647ccb87-wkv5s_calico-system(2addd270-7ed2-4caf-9455-ca6a63f6fe8b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4cdaa491d3e5e99b011a26bc7b396677fc6c50fe55460583b93d67a33b26bf2b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" podUID="2addd270-7ed2-4caf-9455-ca6a63f6fe8b" Aug 13 00:51:21.093242 kubelet[2714]: I0813 00:51:21.093202 2714 kubelet.go:2306] "Pod admission denied" podUID="6786387d-b754-481c-a6e4-65af8d6b94c1" pod="tigera-operator/tigera-operator-5bf8dfcb4-dbh84" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:21.196959 kubelet[2714]: I0813 00:51:21.196919 2714 kubelet.go:2306] "Pod admission denied" podUID="176681b3-924b-4261-9b69-4ca8c2ce62cc" pod="tigera-operator/tigera-operator-5bf8dfcb4-gbdvc" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 00:51:21.292882 kubelet[2714]: I0813 00:51:21.292834 2714 kubelet.go:2306] "Pod admission denied" podUID="c0aa0685-a564-400e-b0c0-20651e9558f8" pod="tigera-operator/tigera-operator-5bf8dfcb4-6c8cf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:21.396065 kubelet[2714]: I0813 00:51:21.396024 2714 kubelet.go:2306] "Pod admission denied" podUID="df83beef-52e2-446d-bdd8-81ab6d164457" pod="tigera-operator/tigera-operator-5bf8dfcb4-w68sz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:21.496034 kubelet[2714]: I0813 00:51:21.495971 2714 kubelet.go:2306] "Pod admission denied" podUID="d9710490-9988-4ff0-9ce2-06c0d1484167" pod="tigera-operator/tigera-operator-5bf8dfcb4-n44d2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:21.542353 kubelet[2714]: I0813 00:51:21.542310 2714 kubelet.go:2306] "Pod admission denied" podUID="3c552491-18cb-4ff8-b73a-c9667b67b90d" pod="tigera-operator/tigera-operator-5bf8dfcb4-r7bqv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:21.653931 kubelet[2714]: I0813 00:51:21.652892 2714 kubelet.go:2306] "Pod admission denied" podUID="138d15e8-c33a-47c1-b7d3-1bbb0fd248f1" pod="tigera-operator/tigera-operator-5bf8dfcb4-8wm2l" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:21.743729 kubelet[2714]: I0813 00:51:21.743688 2714 kubelet.go:2306] "Pod admission denied" podUID="e0167ada-0bb0-479e-9964-f83725e0aa53" pod="tigera-operator/tigera-operator-5bf8dfcb4-8897w" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:21.844433 kubelet[2714]: I0813 00:51:21.844390 2714 kubelet.go:2306] "Pod admission denied" podUID="ddf42623-0d4a-4071-8ecc-cc9311683854" pod="tigera-operator/tigera-operator-5bf8dfcb4-22sn5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:21.945567 kubelet[2714]: I0813 00:51:21.945133 2714 kubelet.go:2306] "Pod admission denied" podUID="40900486-6aa5-400a-a068-aba4c2e2e3c4" pod="tigera-operator/tigera-operator-5bf8dfcb4-92mnt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:22.043563 kubelet[2714]: I0813 00:51:22.043510 2714 kubelet.go:2306] "Pod admission denied" podUID="be427833-27af-4f45-ac93-4f39389ad7f8" pod="tigera-operator/tigera-operator-5bf8dfcb4-kcv96" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:22.145001 kubelet[2714]: I0813 00:51:22.144943 2714 kubelet.go:2306] "Pod admission denied" podUID="a8393587-bfbb-4e7d-8419-a52ccbcf5149" pod="tigera-operator/tigera-operator-5bf8dfcb4-bwx6b" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:22.245602 kubelet[2714]: I0813 00:51:22.245138 2714 kubelet.go:2306] "Pod admission denied" podUID="6b12d6f1-aead-4f01-92ef-123f0132a052" pod="tigera-operator/tigera-operator-5bf8dfcb4-rwckg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:22.454546 kubelet[2714]: I0813 00:51:22.453993 2714 kubelet.go:2306] "Pod admission denied" podUID="36590049-64cf-4588-b9b0-5598daf815a2" pod="tigera-operator/tigera-operator-5bf8dfcb4-bkq9g" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:22.559973 kubelet[2714]: I0813 00:51:22.559873 2714 kubelet.go:2306] "Pod admission denied" podUID="fe6c7eca-8c7d-410b-bc3a-5407d9335fa9" pod="tigera-operator/tigera-operator-5bf8dfcb4-p5gtq" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 00:51:22.642960 kubelet[2714]: I0813 00:51:22.642910 2714 kubelet.go:2306] "Pod admission denied" podUID="c94346ce-89f4-4824-aff8-12fc5ba54518" pod="tigera-operator/tigera-operator-5bf8dfcb4-ctxkc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:22.744442 kubelet[2714]: I0813 00:51:22.744404 2714 kubelet.go:2306] "Pod admission denied" podUID="037e4691-43b2-457d-a585-575e6d8974b3" pod="tigera-operator/tigera-operator-5bf8dfcb4-mwv4j" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:22.845559 kubelet[2714]: I0813 00:51:22.844686 2714 kubelet.go:2306] "Pod admission denied" podUID="611a7c7e-eea5-4578-818f-9586d859e587" pod="tigera-operator/tigera-operator-5bf8dfcb4-cffrk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:22.946072 kubelet[2714]: I0813 00:51:22.946020 2714 kubelet.go:2306] "Pod admission denied" podUID="e68f5a4d-313e-4bda-a3f1-511de9b2b40f" pod="tigera-operator/tigera-operator-5bf8dfcb4-cx28d" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:22.956533 kubelet[2714]: E0813 00:51:22.956328 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\"\"" pod="calico-system/calico-node-x7x94" podUID="ab709cf9-e61c-420b-90c5-1c0355308621" Aug 13 00:51:23.054548 kubelet[2714]: I0813 00:51:23.053835 2714 kubelet.go:2306] "Pod admission denied" podUID="1843f018-1b2a-48c8-9787-322a7fba2461" pod="tigera-operator/tigera-operator-5bf8dfcb4-wdw84" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:23.145379 kubelet[2714]: I0813 00:51:23.145334 2714 kubelet.go:2306] "Pod admission denied" podUID="37c928f4-5ae4-4351-80ec-6e6a82fb2b07" pod="tigera-operator/tigera-operator-5bf8dfcb4-7pb4h" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:23.198215 kubelet[2714]: I0813 00:51:23.198166 2714 kubelet.go:2306] "Pod admission denied" podUID="cfde9117-1e3a-4ec2-9a54-7b4474e1310d" pod="tigera-operator/tigera-operator-5bf8dfcb4-5trxz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:23.304452 kubelet[2714]: I0813 00:51:23.304385 2714 kubelet.go:2306] "Pod admission denied" podUID="c7d6847f-32d2-484d-9495-ed3ff88cbbc7" pod="tigera-operator/tigera-operator-5bf8dfcb4-kgckj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:23.398007 kubelet[2714]: I0813 00:51:23.397717 2714 kubelet.go:2306] "Pod admission denied" podUID="a6ebe804-7f34-40df-8e5e-9b18208b4b30" pod="tigera-operator/tigera-operator-5bf8dfcb4-zn8qm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:23.496405 kubelet[2714]: I0813 00:51:23.496353 2714 kubelet.go:2306] "Pod admission denied" podUID="f90450fc-a908-4354-9385-6ea34719b9cf" pod="tigera-operator/tigera-operator-5bf8dfcb4-v94ts" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 00:51:23.557201 kubelet[2714]: I0813 00:51:23.557169 2714 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:51:23.557201 kubelet[2714]: I0813 00:51:23.557207 2714 container_gc.go:88] "Attempting to delete unused containers" Aug 13 00:51:23.558869 kubelet[2714]: I0813 00:51:23.558589 2714 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:51:23.569469 kubelet[2714]: I0813 00:51:23.569441 2714 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:51:23.569600 kubelet[2714]: I0813 00:51:23.569570 2714 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-6d647ccb87-wkv5s","kube-system/coredns-7c65d6cfc9-hxx58","kube-system/coredns-7c65d6cfc9-dnlsw","calico-system/calico-node-x7x94","calico-system/csi-node-driver-mmxc6","calico-system/calico-typha-644589c98-5v7wp","kube-system/kube-controller-manager-172-234-199-101","kube-system/kube-proxy-vxgfg","kube-system/kube-apiserver-172-234-199-101","kube-system/kube-scheduler-172-234-199-101"] Aug 13 00:51:23.569680 kubelet[2714]: E0813 00:51:23.569602 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" Aug 13 00:51:23.569680 kubelet[2714]: E0813 00:51:23.569617 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-hxx58" Aug 13 00:51:23.569680 kubelet[2714]: E0813 00:51:23.569626 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-dnlsw" Aug 13 00:51:23.569680 kubelet[2714]: E0813 00:51:23.569633 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-x7x94" Aug 13 00:51:23.569680 kubelet[2714]: E0813 00:51:23.569639 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-mmxc6" Aug 13 00:51:23.569680 kubelet[2714]: E0813 00:51:23.569653 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-644589c98-5v7wp" Aug 13 00:51:23.569680 kubelet[2714]: E0813 00:51:23.569663 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-199-101" Aug 13 00:51:23.569680 kubelet[2714]: E0813 00:51:23.569675 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-vxgfg" Aug 13 00:51:23.569680 kubelet[2714]: E0813 00:51:23.569684 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-199-101" Aug 13 00:51:23.569857 kubelet[2714]: E0813 00:51:23.569693 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-199-101" Aug 13 00:51:23.569857 kubelet[2714]: I0813 00:51:23.569703 2714 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 00:51:23.595050 kubelet[2714]: I0813 00:51:23.595001 2714 kubelet.go:2306] "Pod admission denied" podUID="48070c51-6a7c-45ca-89a0-1f4b9c7be4dd" pod="tigera-operator/tigera-operator-5bf8dfcb4-84zth" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 00:51:23.646780 kubelet[2714]: I0813 00:51:23.646727 2714 kubelet.go:2306] "Pod admission denied" podUID="33496d2f-7c06-4259-b4c0-71e53eab26b9" pod="tigera-operator/tigera-operator-5bf8dfcb4-klkg6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:23.745563 kubelet[2714]: I0813 00:51:23.745066 2714 kubelet.go:2306] "Pod admission denied" podUID="27ae8d90-61b6-4c8f-919d-bbc9cf0dc99f" pod="tigera-operator/tigera-operator-5bf8dfcb4-hgw4r" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:23.845118 kubelet[2714]: I0813 00:51:23.845069 2714 kubelet.go:2306] "Pod admission denied" podUID="1e898e51-237d-4d69-b7b8-c26ab1031929" pod="tigera-operator/tigera-operator-5bf8dfcb4-5g8jq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:23.949298 kubelet[2714]: I0813 00:51:23.949249 2714 kubelet.go:2306] "Pod admission denied" podUID="0e17ca79-51fc-4283-89b9-ee9e842b2238" pod="tigera-operator/tigera-operator-5bf8dfcb4-2jpmk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:23.957464 kubelet[2714]: E0813 00:51:23.957361 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:51:23.959343 containerd[1575]: time="2025-08-13T00:51:23.959133135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dnlsw,Uid:fd8972e4-10f5-4f13-8b21-de07e7f562ab,Namespace:kube-system,Attempt:0,}" Aug 13 00:51:24.015066 containerd[1575]: time="2025-08-13T00:51:24.014882718Z" level=error msg="Failed to destroy network for sandbox \"2213d7ef8a536f6248d8147f4f16cd52dc2b6c58d5c5729497a654c460a751b4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:51:24.016368 containerd[1575]: time="2025-08-13T00:51:24.016317674Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dnlsw,Uid:fd8972e4-10f5-4f13-8b21-de07e7f562ab,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2213d7ef8a536f6248d8147f4f16cd52dc2b6c58d5c5729497a654c460a751b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:51:24.016983 kubelet[2714]: E0813 00:51:24.016692 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2213d7ef8a536f6248d8147f4f16cd52dc2b6c58d5c5729497a654c460a751b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:51:24.016983 kubelet[2714]: E0813 00:51:24.016742 2714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2213d7ef8a536f6248d8147f4f16cd52dc2b6c58d5c5729497a654c460a751b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-dnlsw" Aug 13 00:51:24.016983 kubelet[2714]: E0813 00:51:24.016761 2714 
kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2213d7ef8a536f6248d8147f4f16cd52dc2b6c58d5c5729497a654c460a751b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-dnlsw" Aug 13 00:51:24.016983 kubelet[2714]: E0813 00:51:24.016797 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-dnlsw_kube-system(fd8972e4-10f5-4f13-8b21-de07e7f562ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-dnlsw_kube-system(fd8972e4-10f5-4f13-8b21-de07e7f562ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2213d7ef8a536f6248d8147f4f16cd52dc2b6c58d5c5729497a654c460a751b4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-dnlsw" podUID="fd8972e4-10f5-4f13-8b21-de07e7f562ab" Aug 13 00:51:24.018142 systemd[1]: run-netns-cni\x2dfd5139d5\x2d6fe8\x2d7627\x2d5acb\x2de764d01786f7.mount: Deactivated successfully. Aug 13 00:51:24.046571 kubelet[2714]: I0813 00:51:24.046512 2714 kubelet.go:2306] "Pod admission denied" podUID="0e67ad18-7614-484c-b8ed-d6e8f6de1ac7" pod="tigera-operator/tigera-operator-5bf8dfcb4-s59p6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:24.097310 kubelet[2714]: I0813 00:51:24.097252 2714 kubelet.go:2306] "Pod admission denied" podUID="d7ef5cfb-b000-4878-a6b6-ad0c5485cc9b" pod="tigera-operator/tigera-operator-5bf8dfcb4-2nzhc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:24.194622 kubelet[2714]: I0813 00:51:24.194573 2714 kubelet.go:2306] "Pod admission denied" podUID="022f2826-1fdd-437e-ac12-9f0fbe966fb8" pod="tigera-operator/tigera-operator-5bf8dfcb4-pdmh4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:24.397104 kubelet[2714]: I0813 00:51:24.397048 2714 kubelet.go:2306] "Pod admission denied" podUID="80d550e4-0c1d-4f11-860e-941a79a8d0d2" pod="tigera-operator/tigera-operator-5bf8dfcb4-2rb7v" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:24.495192 kubelet[2714]: I0813 00:51:24.495135 2714 kubelet.go:2306] "Pod admission denied" podUID="b7d7208d-d4ab-4d1f-bd06-4bdc756024db" pod="tigera-operator/tigera-operator-5bf8dfcb4-5b5pd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:24.594232 kubelet[2714]: I0813 00:51:24.594178 2714 kubelet.go:2306] "Pod admission denied" podUID="a70d5678-2eb7-45f2-84e4-3f59613bebe2" pod="tigera-operator/tigera-operator-5bf8dfcb4-4ldsw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:24.695096 kubelet[2714]: I0813 00:51:24.694975 2714 kubelet.go:2306] "Pod admission denied" podUID="957560a4-d469-4628-9891-2501a0a4b919" pod="tigera-operator/tigera-operator-5bf8dfcb4-5dbnl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:24.794306 kubelet[2714]: I0813 00:51:24.794246 2714 kubelet.go:2306] "Pod admission denied" podUID="e0b6b196-0b14-4965-8942-12e1438a2e3b" pod="tigera-operator/tigera-operator-5bf8dfcb4-j7wr2" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 00:51:24.995309 kubelet[2714]: I0813 00:51:24.995008 2714 kubelet.go:2306] "Pod admission denied" podUID="d8ba75ba-77ec-4d01-9240-2607038c0e08" pod="tigera-operator/tigera-operator-5bf8dfcb4-shvgz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:25.097861 kubelet[2714]: I0813 00:51:25.097809 2714 kubelet.go:2306] "Pod admission denied" podUID="f2e48999-c9f9-4f9c-8ea8-8fbafa5cba8a" pod="tigera-operator/tigera-operator-5bf8dfcb4-mp5vz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:25.197459 kubelet[2714]: I0813 00:51:25.197381 2714 kubelet.go:2306] "Pod admission denied" podUID="44fbc4ce-4591-44d5-92d2-894b6def0be5" pod="tigera-operator/tigera-operator-5bf8dfcb4-mrp8z" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:25.396185 kubelet[2714]: I0813 00:51:25.396138 2714 kubelet.go:2306] "Pod admission denied" podUID="936ecaf9-321f-41cc-9664-435a243a637a" pod="tigera-operator/tigera-operator-5bf8dfcb4-rcnln" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:25.492565 kubelet[2714]: I0813 00:51:25.492506 2714 kubelet.go:2306] "Pod admission denied" podUID="a9b039e8-8ad3-4b39-bfa1-7bb629f5662b" pod="tigera-operator/tigera-operator-5bf8dfcb4-5vmbc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:25.594155 kubelet[2714]: I0813 00:51:25.594088 2714 kubelet.go:2306] "Pod admission denied" podUID="2dd13bbc-47da-488c-9abb-d781378cd2f7" pod="tigera-operator/tigera-operator-5bf8dfcb4-gxg6m" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:25.793545 kubelet[2714]: I0813 00:51:25.793427 2714 kubelet.go:2306] "Pod admission denied" podUID="376c8e10-87f8-4ab8-b0af-4bd689c8f73b" pod="tigera-operator/tigera-operator-5bf8dfcb4-lz4z5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:25.894761 kubelet[2714]: I0813 00:51:25.894723 2714 kubelet.go:2306] "Pod admission denied" podUID="1fabbccb-bfa9-4a94-a0e9-f0b940b17b82" pod="tigera-operator/tigera-operator-5bf8dfcb4-czpgl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:25.945336 kubelet[2714]: I0813 00:51:25.945287 2714 kubelet.go:2306] "Pod admission denied" podUID="b95b23fb-14ab-4586-ba65-edc9afc27165" pod="tigera-operator/tigera-operator-5bf8dfcb4-lgw8f" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:26.045390 kubelet[2714]: I0813 00:51:26.045264 2714 kubelet.go:2306] "Pod admission denied" podUID="9bff6501-be91-48d4-8a3d-ad3e0fd7ee74" pod="tigera-operator/tigera-operator-5bf8dfcb4-8p549" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:26.143042 kubelet[2714]: I0813 00:51:26.142995 2714 kubelet.go:2306] "Pod admission denied" podUID="164246f0-cd36-4d37-952b-22d523a50384" pod="tigera-operator/tigera-operator-5bf8dfcb4-sjcc5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:26.243638 kubelet[2714]: I0813 00:51:26.243598 2714 kubelet.go:2306] "Pod admission denied" podUID="b5843d2f-a8ca-48d5-99e5-52ccb9723a4a" pod="tigera-operator/tigera-operator-5bf8dfcb4-l8bbx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:26.449142 kubelet[2714]: I0813 00:51:26.449089 2714 kubelet.go:2306] "Pod admission denied" podUID="96b544ae-1f75-488c-85e2-ffca037d06d8" pod="tigera-operator/tigera-operator-5bf8dfcb4-92txr" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 00:51:26.544853 kubelet[2714]: I0813 00:51:26.544811 2714 kubelet.go:2306] "Pod admission denied" podUID="d7e2e1c2-33d1-435a-b85c-385357ae8469" pod="tigera-operator/tigera-operator-5bf8dfcb4-vwm4g" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:26.645678 kubelet[2714]: I0813 00:51:26.645634 2714 kubelet.go:2306] "Pod admission denied" podUID="69d044dc-f888-4aec-aad8-127385402e7b" pod="tigera-operator/tigera-operator-5bf8dfcb4-2rbbg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:26.844943 kubelet[2714]: I0813 00:51:26.844705 2714 kubelet.go:2306] "Pod admission denied" podUID="1418fd1c-c22c-451c-ad39-cc8d8a56dec5" pod="tigera-operator/tigera-operator-5bf8dfcb4-qmfxn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:26.944326 kubelet[2714]: I0813 00:51:26.944262 2714 kubelet.go:2306] "Pod admission denied" podUID="b7cb1210-9248-4fea-8ca9-b1703be89df2" pod="tigera-operator/tigera-operator-5bf8dfcb4-xxqbv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:26.956544 kubelet[2714]: E0813 00:51:26.955860 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:51:26.958878 containerd[1575]: time="2025-08-13T00:51:26.958840115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hxx58,Uid:635fd78e-3d10-4a30-9894-3818897e1867,Namespace:kube-system,Attempt:0,}" Aug 13 00:51:27.027592 containerd[1575]: time="2025-08-13T00:51:27.027541833Z" level=error msg="Failed to destroy network for sandbox \"0a3ca83f88e4934ccfb9052952a54c11f0bcaa4c94593b2595c9250b46015a19\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:51:27.031042 systemd[1]: run-netns-cni\x2defad1eb7\x2da8e2\x2d9ec1\x2d5282\x2d8cb39f3502a1.mount: Deactivated successfully. 
Aug 13 00:51:27.032552 containerd[1575]: time="2025-08-13T00:51:27.032419442Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hxx58,Uid:635fd78e-3d10-4a30-9894-3818897e1867,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a3ca83f88e4934ccfb9052952a54c11f0bcaa4c94593b2595c9250b46015a19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:51:27.032788 kubelet[2714]: E0813 00:51:27.032719 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a3ca83f88e4934ccfb9052952a54c11f0bcaa4c94593b2595c9250b46015a19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:51:27.032788 kubelet[2714]: E0813 00:51:27.032774 2714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a3ca83f88e4934ccfb9052952a54c11f0bcaa4c94593b2595c9250b46015a19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-hxx58" Aug 13 00:51:27.032930 kubelet[2714]: E0813 00:51:27.032792 2714 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a3ca83f88e4934ccfb9052952a54c11f0bcaa4c94593b2595c9250b46015a19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-hxx58" Aug 13 00:51:27.032930 kubelet[2714]: E0813 00:51:27.032828 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-hxx58_kube-system(635fd78e-3d10-4a30-9894-3818897e1867)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-hxx58_kube-system(635fd78e-3d10-4a30-9894-3818897e1867)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0a3ca83f88e4934ccfb9052952a54c11f0bcaa4c94593b2595c9250b46015a19\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-hxx58" podUID="635fd78e-3d10-4a30-9894-3818897e1867" Aug 13 00:51:27.048678 kubelet[2714]: I0813 00:51:27.048642 2714 kubelet.go:2306] "Pod admission denied" podUID="43c524ed-dd7c-4044-ae79-69a5b584ac17" pod="tigera-operator/tigera-operator-5bf8dfcb4-nksqn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:27.150179 kubelet[2714]: I0813 00:51:27.150130 2714 kubelet.go:2306] "Pod admission denied" podUID="eae12d48-5eb8-4bec-970b-788c6ebb09e5" pod="tigera-operator/tigera-operator-5bf8dfcb4-n7svk" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 00:51:27.245373 kubelet[2714]: I0813 00:51:27.245323 2714 kubelet.go:2306] "Pod admission denied" podUID="c6d32e35-5dac-41e6-acfb-d7d5b9df3fe9" pod="tigera-operator/tigera-operator-5bf8dfcb4-t7vvd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:27.346807 kubelet[2714]: I0813 00:51:27.346756 2714 kubelet.go:2306] "Pod admission denied" podUID="60d42431-0451-48c3-a327-a39708ade3e5" pod="tigera-operator/tigera-operator-5bf8dfcb4-8dqxj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:27.393276 kubelet[2714]: I0813 00:51:27.393231 2714 kubelet.go:2306] "Pod admission denied" podUID="047f602a-72d6-48c6-9f18-72e558f842c8" pod="tigera-operator/tigera-operator-5bf8dfcb4-v8zjk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:27.495745 kubelet[2714]: I0813 00:51:27.495629 2714 kubelet.go:2306] "Pod admission denied" podUID="f63d10d6-0bb1-42bd-840b-cf94e7739871" pod="tigera-operator/tigera-operator-5bf8dfcb4-dnbnm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:27.695991 kubelet[2714]: I0813 00:51:27.695941 2714 kubelet.go:2306] "Pod admission denied" podUID="0363f1f4-3262-4055-a7c0-34950af6b473" pod="tigera-operator/tigera-operator-5bf8dfcb4-6zlwx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:27.796467 kubelet[2714]: I0813 00:51:27.796253 2714 kubelet.go:2306] "Pod admission denied" podUID="824c9a2b-56bb-4e3a-98e7-5eac7b8a7b4b" pod="tigera-operator/tigera-operator-5bf8dfcb4-sswg7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:27.894916 kubelet[2714]: I0813 00:51:27.894863 2714 kubelet.go:2306] "Pod admission denied" podUID="bbfb9800-3a14-4a35-b634-eff372a333b2" pod="tigera-operator/tigera-operator-5bf8dfcb4-25hhx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:27.996317 kubelet[2714]: I0813 00:51:27.996266 2714 kubelet.go:2306] "Pod admission denied" podUID="389921af-d8b7-4b40-a8ea-5ad319a741d6" pod="tigera-operator/tigera-operator-5bf8dfcb4-ghwcm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:28.048487 kubelet[2714]: I0813 00:51:28.047166 2714 kubelet.go:2306] "Pod admission denied" podUID="7ec7d782-268a-48a7-a26d-8c3eecf11a0c" pod="tigera-operator/tigera-operator-5bf8dfcb4-8p5mx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:28.147544 kubelet[2714]: I0813 00:51:28.147327 2714 kubelet.go:2306] "Pod admission denied" podUID="1173d7d7-87bf-4aad-8cff-70e20b23e38d" pod="tigera-operator/tigera-operator-5bf8dfcb4-qbvhp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:28.246241 kubelet[2714]: I0813 00:51:28.246189 2714 kubelet.go:2306] "Pod admission denied" podUID="da6c74a4-9034-458c-b65c-65a951bc4873" pod="tigera-operator/tigera-operator-5bf8dfcb4-ntg7h" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:28.348306 kubelet[2714]: I0813 00:51:28.348186 2714 kubelet.go:2306] "Pod admission denied" podUID="f4e523c0-3f07-442a-babd-0453351580bb" pod="tigera-operator/tigera-operator-5bf8dfcb4-dzsld" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:28.453538 kubelet[2714]: I0813 00:51:28.453365 2714 kubelet.go:2306] "Pod admission denied" podUID="e3056bac-721a-49c1-bf37-8502bb1eb7ef" pod="tigera-operator/tigera-operator-5bf8dfcb4-jkvx7" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 00:51:28.546359 kubelet[2714]: I0813 00:51:28.546305 2714 kubelet.go:2306] "Pod admission denied" podUID="9f9f6b1b-df4a-42a2-bff7-22bebb08f676" pod="tigera-operator/tigera-operator-5bf8dfcb4-r44rk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:28.646185 kubelet[2714]: I0813 00:51:28.646134 2714 kubelet.go:2306] "Pod admission denied" podUID="efa1446d-4541-4ffd-9e9d-414f87a352a0" pod="tigera-operator/tigera-operator-5bf8dfcb4-lswf7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:28.746098 kubelet[2714]: I0813 00:51:28.746037 2714 kubelet.go:2306] "Pod admission denied" podUID="8e06b8c1-d065-4ffd-9ba5-da1afead16cb" pod="tigera-operator/tigera-operator-5bf8dfcb4-z5sf6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:28.948358 kubelet[2714]: I0813 00:51:28.948233 2714 kubelet.go:2306] "Pod admission denied" podUID="2bee1dfb-e135-4955-a00a-5da3fbe6a901" pod="tigera-operator/tigera-operator-5bf8dfcb4-frlg7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:28.957757 kubelet[2714]: E0813 00:51:28.957718 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:51:29.049032 kubelet[2714]: I0813 00:51:29.048981 2714 kubelet.go:2306] "Pod admission denied" podUID="b732d91a-9463-4a08-8765-b8909484aef8" pod="tigera-operator/tigera-operator-5bf8dfcb4-799bp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:29.104540 kubelet[2714]: I0813 00:51:29.104187 2714 kubelet.go:2306] "Pod admission denied" podUID="276406d8-21b1-4efb-8dea-1c7f02117006" pod="tigera-operator/tigera-operator-5bf8dfcb4-7m6bp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:29.195419 kubelet[2714]: I0813 00:51:29.195369 2714 kubelet.go:2306] "Pod admission denied" podUID="ed631657-78b6-48b8-bdcb-5585ccb67ef1" pod="tigera-operator/tigera-operator-5bf8dfcb4-jcjpn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:29.297097 kubelet[2714]: I0813 00:51:29.296985 2714 kubelet.go:2306] "Pod admission denied" podUID="0cad3095-d2c1-4c25-b5f7-63e62d52f94b" pod="tigera-operator/tigera-operator-5bf8dfcb4-mblw9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:29.409543 kubelet[2714]: I0813 00:51:29.408852 2714 kubelet.go:2306] "Pod admission denied" podUID="fbbb1039-ebf3-48f1-917b-62e619d068ba" pod="tigera-operator/tigera-operator-5bf8dfcb4-x7wk2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:29.497460 kubelet[2714]: I0813 00:51:29.497410 2714 kubelet.go:2306] "Pod admission denied" podUID="609e260b-d35e-4c99-9798-00f114e02770" pod="tigera-operator/tigera-operator-5bf8dfcb4-tmcwz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:29.598195 kubelet[2714]: I0813 00:51:29.598071 2714 kubelet.go:2306] "Pod admission denied" podUID="69b27382-f230-48f0-8d6c-d4810416a232" pod="tigera-operator/tigera-operator-5bf8dfcb4-kzccb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:29.703545 kubelet[2714]: I0813 00:51:29.702817 2714 kubelet.go:2306] "Pod admission denied" podUID="c4ee14e6-6528-4669-a5b9-edea306122d9" pod="tigera-operator/tigera-operator-5bf8dfcb4-nn8cn" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 00:51:29.798790 kubelet[2714]: I0813 00:51:29.798739 2714 kubelet.go:2306] "Pod admission denied" podUID="77b75684-fbf2-4d75-bb88-e6601cb9cfff" pod="tigera-operator/tigera-operator-5bf8dfcb4-jpnns" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:29.999785 kubelet[2714]: I0813 00:51:29.999723 2714 kubelet.go:2306] "Pod admission denied" podUID="d0d0daf9-1c20-482b-896b-5ff1a1338f6e" pod="tigera-operator/tigera-operator-5bf8dfcb4-v2fl7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:30.109542 kubelet[2714]: I0813 00:51:30.108785 2714 kubelet.go:2306] "Pod admission denied" podUID="292616f2-590e-45b1-abf0-1d4f1cf8b095" pod="tigera-operator/tigera-operator-5bf8dfcb4-648bx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:30.196537 kubelet[2714]: I0813 00:51:30.196465 2714 kubelet.go:2306] "Pod admission denied" podUID="9ebad958-dcbc-406c-adf1-e987f27240ec" pod="tigera-operator/tigera-operator-5bf8dfcb4-tm2z7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:30.397149 kubelet[2714]: I0813 00:51:30.397105 2714 kubelet.go:2306] "Pod admission denied" podUID="86a28d72-772f-4828-9ef1-536f7eb3d308" pod="tigera-operator/tigera-operator-5bf8dfcb4-jkqbw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:30.502542 kubelet[2714]: I0813 00:51:30.501941 2714 kubelet.go:2306] "Pod admission denied" podUID="66f1d03b-5c90-45fe-96c3-a4ad22de22bb" pod="tigera-operator/tigera-operator-5bf8dfcb4-75lgt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:30.595802 kubelet[2714]: I0813 00:51:30.595745 2714 kubelet.go:2306] "Pod admission denied" podUID="e8a50794-4db1-4524-be09-af648f6b9d0f" pod="tigera-operator/tigera-operator-5bf8dfcb4-pbt6k" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:30.697278 kubelet[2714]: I0813 00:51:30.696899 2714 kubelet.go:2306] "Pod admission denied" podUID="25ab36aa-b13d-4237-bab7-4e27f5816b2c" pod="tigera-operator/tigera-operator-5bf8dfcb4-6qh69" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:30.801844 kubelet[2714]: I0813 00:51:30.801807 2714 kubelet.go:2306] "Pod admission denied" podUID="7fabddc9-2219-4d1a-b58f-b71b9f8a4a5f" pod="tigera-operator/tigera-operator-5bf8dfcb4-2m4qs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:30.896186 kubelet[2714]: I0813 00:51:30.896140 2714 kubelet.go:2306] "Pod admission denied" podUID="dfe42e74-bd60-4b60-a005-d5234a47b29e" pod="tigera-operator/tigera-operator-5bf8dfcb4-rfjkf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:30.943244 kubelet[2714]: I0813 00:51:30.943212 2714 kubelet.go:2306] "Pod admission denied" podUID="eee5014a-2330-44e9-8cb7-b6be7d4ab9c2" pod="tigera-operator/tigera-operator-5bf8dfcb4-grfvb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:31.052702 kubelet[2714]: I0813 00:51:31.051819 2714 kubelet.go:2306] "Pod admission denied" podUID="759b1d99-b170-466a-b9a0-bc3253d5507b" pod="tigera-operator/tigera-operator-5bf8dfcb4-qkgbv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:31.149307 kubelet[2714]: I0813 00:51:31.149251 2714 kubelet.go:2306] "Pod admission denied" podUID="66daacbd-9ef3-4cd5-9071-0fd5c7386ab2" pod="tigera-operator/tigera-operator-5bf8dfcb4-5k42n" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 00:51:31.253711 kubelet[2714]: I0813 00:51:31.253666 2714 kubelet.go:2306] "Pod admission denied" podUID="fae4e41e-f9a9-4352-bb88-2a2127a54f2f" pod="tigera-operator/tigera-operator-5bf8dfcb4-ddg87" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:31.459172 kubelet[2714]: I0813 00:51:31.458707 2714 kubelet.go:2306] "Pod admission denied" podUID="afe77ba7-4fa5-42ec-b64f-a2e5aa28593e" pod="tigera-operator/tigera-operator-5bf8dfcb4-k6djw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:31.544593 kubelet[2714]: I0813 00:51:31.544555 2714 kubelet.go:2306] "Pod admission denied" podUID="616dcc82-c156-46e8-b29f-54ac94d2eb00" pod="tigera-operator/tigera-operator-5bf8dfcb4-jfdp9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:31.645008 kubelet[2714]: I0813 00:51:31.644963 2714 kubelet.go:2306] "Pod admission denied" podUID="05e1f477-4e18-496b-b6d9-8d9756f1cb37" pod="tigera-operator/tigera-operator-5bf8dfcb4-x8h6w" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:31.756616 kubelet[2714]: I0813 00:51:31.755805 2714 kubelet.go:2306] "Pod admission denied" podUID="bd29013c-6226-4015-af74-14978702fa76" pod="tigera-operator/tigera-operator-5bf8dfcb4-mhbm5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:31.846578 kubelet[2714]: I0813 00:51:31.846532 2714 kubelet.go:2306] "Pod admission denied" podUID="2a61dd4c-e9c0-43b6-af96-27b05abf87ae" pod="tigera-operator/tigera-operator-5bf8dfcb4-cm2lc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:31.949901 kubelet[2714]: I0813 00:51:31.949847 2714 kubelet.go:2306] "Pod admission denied" podUID="ddb3f247-cddd-455c-a8b7-720406ea25ea" pod="tigera-operator/tigera-operator-5bf8dfcb4-k42cs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:32.005547 kubelet[2714]: I0813 00:51:32.004496 2714 kubelet.go:2306] "Pod admission denied" podUID="0cf10b76-2e2f-4fe2-8baa-a7b68ef56e43" pod="tigera-operator/tigera-operator-5bf8dfcb4-qn55n" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:32.099733 kubelet[2714]: I0813 00:51:32.099623 2714 kubelet.go:2306] "Pod admission denied" podUID="ffac0b44-70de-42e0-8759-df7409ca1f31" pod="tigera-operator/tigera-operator-5bf8dfcb4-tflfk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:32.195790 kubelet[2714]: I0813 00:51:32.195739 2714 kubelet.go:2306] "Pod admission denied" podUID="c6cb53f8-3860-4597-a639-7466cfe6a70b" pod="tigera-operator/tigera-operator-5bf8dfcb4-6ck2v" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:32.256379 kubelet[2714]: I0813 00:51:32.256335 2714 kubelet.go:2306] "Pod admission denied" podUID="80e7bf2b-297c-4c37-939c-79937ba4cab1" pod="tigera-operator/tigera-operator-5bf8dfcb4-565zk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:32.350480 kubelet[2714]: I0813 00:51:32.349826 2714 kubelet.go:2306] "Pod admission denied" podUID="7f128ddf-9f13-4721-baee-338eb2d21b10" pod="tigera-operator/tigera-operator-5bf8dfcb4-7nvp5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:32.448937 kubelet[2714]: I0813 00:51:32.448892 2714 kubelet.go:2306] "Pod admission denied" podUID="a4c7ee95-dda8-4383-a3b4-3606f7c640d3" pod="tigera-operator/tigera-operator-5bf8dfcb4-cjcns" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 00:51:32.556193 kubelet[2714]: I0813 00:51:32.555435 2714 kubelet.go:2306] "Pod admission denied" podUID="65b8bb9f-e30f-44da-9610-41119e7beebb" pod="tigera-operator/tigera-operator-5bf8dfcb4-zfqvd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:32.749505 kubelet[2714]: I0813 00:51:32.749464 2714 kubelet.go:2306] "Pod admission denied" podUID="2a43ed8f-8ee3-4734-b1d6-d52b387e1b6d" pod="tigera-operator/tigera-operator-5bf8dfcb4-b97x9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:32.847308 kubelet[2714]: I0813 00:51:32.847263 2714 kubelet.go:2306] "Pod admission denied" podUID="3491f96e-307f-4617-afc9-2648af3a2fe1" pod="tigera-operator/tigera-operator-5bf8dfcb4-pkjl4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:32.902959 kubelet[2714]: I0813 00:51:32.902141 2714 kubelet.go:2306] "Pod admission denied" podUID="9c39cc52-a05a-4303-a32a-7f2cd6a332aa" pod="tigera-operator/tigera-operator-5bf8dfcb4-5zw5f" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:32.994805 kubelet[2714]: I0813 00:51:32.994757 2714 kubelet.go:2306] "Pod admission denied" podUID="a4b1e0a4-72cf-48f7-a6a5-64fc01212b2f" pod="tigera-operator/tigera-operator-5bf8dfcb4-8l9ck" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:33.100477 kubelet[2714]: I0813 00:51:33.100368 2714 kubelet.go:2306] "Pod admission denied" podUID="23d7671d-7ed3-4064-8bb0-494b660c86b7" pod="tigera-operator/tigera-operator-5bf8dfcb4-vhfbb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:33.152184 kubelet[2714]: I0813 00:51:33.151585 2714 kubelet.go:2306] "Pod admission denied" podUID="399ad7e2-dac6-46ac-bd4d-ce62f17e8d22" pod="tigera-operator/tigera-operator-5bf8dfcb4-g7plt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:33.245343 kubelet[2714]: I0813 00:51:33.245298 2714 kubelet.go:2306] "Pod admission denied" podUID="30e2f29c-9252-4513-993b-021909baa2a4" pod="tigera-operator/tigera-operator-5bf8dfcb4-6r4mx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:33.347641 kubelet[2714]: I0813 00:51:33.347596 2714 kubelet.go:2306] "Pod admission denied" podUID="d71fdcf3-b0a4-41bd-bcc9-7141cb8594af" pod="tigera-operator/tigera-operator-5bf8dfcb4-t6zd2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:33.456537 kubelet[2714]: I0813 00:51:33.456401 2714 kubelet.go:2306] "Pod admission denied" podUID="0973b2a9-1099-4908-964e-4702abe7a9e5" pod="tigera-operator/tigera-operator-5bf8dfcb4-6k7jp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:33.548871 kubelet[2714]: I0813 00:51:33.548831 2714 kubelet.go:2306] "Pod admission denied" podUID="cf991583-9161-44b4-b02b-33e83a79ce86" pod="tigera-operator/tigera-operator-5bf8dfcb4-brmmj" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 00:51:33.592363 kubelet[2714]: I0813 00:51:33.592330 2714 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:51:33.592363 kubelet[2714]: I0813 00:51:33.592360 2714 container_gc.go:88] "Attempting to delete unused containers" Aug 13 00:51:33.593946 kubelet[2714]: I0813 00:51:33.593918 2714 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:51:33.610349 kubelet[2714]: I0813 00:51:33.610310 2714 kubelet.go:2306] "Pod admission denied" podUID="207664ce-cd51-4ce1-9d51-a1faeaf7bf2f" pod="tigera-operator/tigera-operator-5bf8dfcb4-vm97m" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:33.611845 kubelet[2714]: I0813 00:51:33.611655 2714 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:51:33.611845 kubelet[2714]: I0813 00:51:33.611725 2714 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-6d647ccb87-wkv5s","kube-system/coredns-7c65d6cfc9-hxx58","kube-system/coredns-7c65d6cfc9-dnlsw","calico-system/calico-node-x7x94","calico-system/csi-node-driver-mmxc6","calico-system/calico-typha-644589c98-5v7wp","kube-system/kube-controller-manager-172-234-199-101","kube-system/kube-proxy-vxgfg","kube-system/kube-apiserver-172-234-199-101","kube-system/kube-scheduler-172-234-199-101"] Aug 13 00:51:33.611845 kubelet[2714]: E0813 00:51:33.611745 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" Aug 13 00:51:33.611845 kubelet[2714]: E0813 00:51:33.611755 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-hxx58" Aug 13 00:51:33.611845 kubelet[2714]: E0813 00:51:33.611761 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-dnlsw" Aug 13 00:51:33.611845 kubelet[2714]: E0813 00:51:33.611767 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-x7x94" Aug 13 00:51:33.611845 kubelet[2714]: E0813 00:51:33.611773 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-mmxc6" Aug 13 00:51:33.611845 kubelet[2714]: E0813 00:51:33.611783 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-644589c98-5v7wp" Aug 13 00:51:33.611845 kubelet[2714]: E0813 00:51:33.611791 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-199-101" Aug 13 00:51:33.611845 kubelet[2714]: E0813 00:51:33.611799 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-vxgfg" Aug 13 00:51:33.611845 kubelet[2714]: E0813 00:51:33.611807 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-199-101" Aug 13 00:51:33.611845 kubelet[2714]: E0813 00:51:33.611815 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-199-101" Aug 13 00:51:33.611845 kubelet[2714]: I0813 00:51:33.611825 2714 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 00:51:33.698412 kubelet[2714]: I0813 00:51:33.698352 2714 kubelet.go:2306] "Pod admission denied" 
podUID="e14276b1-c1ed-4c41-a4e3-86fef42ac94b" pod="tigera-operator/tigera-operator-5bf8dfcb4-q8g85" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:33.897599 kubelet[2714]: I0813 00:51:33.897541 2714 kubelet.go:2306] "Pod admission denied" podUID="73c157ef-4580-4678-a7ad-a99bfe7a8639" pod="tigera-operator/tigera-operator-5bf8dfcb4-kgxns" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:33.957038 containerd[1575]: time="2025-08-13T00:51:33.956347359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d647ccb87-wkv5s,Uid:2addd270-7ed2-4caf-9455-ca6a63f6fe8b,Namespace:calico-system,Attempt:0,}" Aug 13 00:51:33.957625 containerd[1575]: time="2025-08-13T00:51:33.957604233Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 00:51:34.018300 kubelet[2714]: I0813 00:51:34.018258 2714 kubelet.go:2306] "Pod admission denied" podUID="174d6530-1b97-4fb1-80bb-aa8e83389462" pod="tigera-operator/tigera-operator-5bf8dfcb4-q25gf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:34.025414 containerd[1575]: time="2025-08-13T00:51:34.023492695Z" level=error msg="Failed to destroy network for sandbox \"754cf98cddb7d11f7c6ca3b016d54d3230fede8418ed25f8d4b90e53d42775db\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:51:34.027347 systemd[1]: run-netns-cni\x2d2a0cda21\x2d0c19\x2d0f3d\x2d99f9\x2d8c96119857e3.mount: Deactivated successfully. Aug 13 00:51:34.029211 containerd[1575]: time="2025-08-13T00:51:34.029173904Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d647ccb87-wkv5s,Uid:2addd270-7ed2-4caf-9455-ca6a63f6fe8b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"754cf98cddb7d11f7c6ca3b016d54d3230fede8418ed25f8d4b90e53d42775db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:51:34.030247 kubelet[2714]: E0813 00:51:34.030216 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"754cf98cddb7d11f7c6ca3b016d54d3230fede8418ed25f8d4b90e53d42775db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:51:34.030295 kubelet[2714]: E0813 00:51:34.030256 2714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"754cf98cddb7d11f7c6ca3b016d54d3230fede8418ed25f8d4b90e53d42775db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" Aug 13 00:51:34.030295 kubelet[2714]: E0813 00:51:34.030274 2714 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"754cf98cddb7d11f7c6ca3b016d54d3230fede8418ed25f8d4b90e53d42775db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" Aug 13 00:51:34.031764 kubelet[2714]: E0813 00:51:34.031727 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6d647ccb87-wkv5s_calico-system(2addd270-7ed2-4caf-9455-ca6a63f6fe8b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6d647ccb87-wkv5s_calico-system(2addd270-7ed2-4caf-9455-ca6a63f6fe8b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"754cf98cddb7d11f7c6ca3b016d54d3230fede8418ed25f8d4b90e53d42775db\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" podUID="2addd270-7ed2-4caf-9455-ca6a63f6fe8b" Aug 13 00:51:34.098300 kubelet[2714]: I0813 00:51:34.098244 2714 kubelet.go:2306] "Pod admission denied" podUID="e8a66ed0-57af-4367-ab3f-03f458b8a642" pod="tigera-operator/tigera-operator-5bf8dfcb4-sljcx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:34.201726 kubelet[2714]: I0813 00:51:34.201609 2714 kubelet.go:2306] "Pod admission denied" podUID="f855a3a2-287f-44eb-9861-a8336c6b14ec" pod="tigera-operator/tigera-operator-5bf8dfcb4-nk4gn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:34.298033 kubelet[2714]: I0813 00:51:34.297979 2714 kubelet.go:2306] "Pod admission denied" podUID="79ee017f-11d9-4e80-9b8d-e8518eb16e33" pod="tigera-operator/tigera-operator-5bf8dfcb4-kcnw7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:34.397590 kubelet[2714]: I0813 00:51:34.397504 2714 kubelet.go:2306] "Pod admission denied" podUID="d7c2ee0d-72f2-4fa0-8706-d11bbf5d2289" pod="tigera-operator/tigera-operator-5bf8dfcb4-bg4qf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:34.500958 kubelet[2714]: I0813 00:51:34.500699 2714 kubelet.go:2306] "Pod admission denied" podUID="7e2b5293-4171-4ae6-ade3-695ac5fc0e3b" pod="tigera-operator/tigera-operator-5bf8dfcb4-k7jtm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:34.599250 kubelet[2714]: I0813 00:51:34.599208 2714 kubelet.go:2306] "Pod admission denied" podUID="db007956-82f0-47de-a34f-07e2ede7c7d5" pod="tigera-operator/tigera-operator-5bf8dfcb4-62zgm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:34.703710 kubelet[2714]: I0813 00:51:34.703608 2714 kubelet.go:2306] "Pod admission denied" podUID="b56e72af-2cb3-439b-a959-4653c5ef3e97" pod="tigera-operator/tigera-operator-5bf8dfcb4-2xg48" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:34.803552 kubelet[2714]: I0813 00:51:34.803044 2714 kubelet.go:2306] "Pod admission denied" podUID="14b37e0a-dd63-4be1-9266-2af65a854e3c" pod="tigera-operator/tigera-operator-5bf8dfcb4-g5w9p" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:34.901051 kubelet[2714]: I0813 00:51:34.901006 2714 kubelet.go:2306] "Pod admission denied" podUID="583e8d52-eda0-4454-b0b5-cf93dae2a4a5" pod="tigera-operator/tigera-operator-5bf8dfcb4-b9w6q" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 00:51:34.956023 kubelet[2714]: E0813 00:51:34.955972 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:51:34.956126 containerd[1575]: time="2025-08-13T00:51:34.956005789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mmxc6,Uid:7697ce71-aa40-4c78-acaa-c59079720a2c,Namespace:calico-system,Attempt:0,}" Aug 13 00:51:34.959900 containerd[1575]: time="2025-08-13T00:51:34.959716041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dnlsw,Uid:fd8972e4-10f5-4f13-8b21-de07e7f562ab,Namespace:kube-system,Attempt:0,}" Aug 13 00:51:35.043634 containerd[1575]: time="2025-08-13T00:51:35.043583247Z" level=error msg="Failed to destroy network for sandbox \"dbbb2b3f2366d2c6df7f0e9165afaeb2343933fb34b0cb4872853c8c7165d815\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:51:35.044884 containerd[1575]: time="2025-08-13T00:51:35.044582680Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mmxc6,Uid:7697ce71-aa40-4c78-acaa-c59079720a2c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbbb2b3f2366d2c6df7f0e9165afaeb2343933fb34b0cb4872853c8c7165d815\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:51:35.044975 kubelet[2714]: E0813 00:51:35.044799 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbbb2b3f2366d2c6df7f0e9165afaeb2343933fb34b0cb4872853c8c7165d815\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:51:35.044975 kubelet[2714]: E0813 00:51:35.044873 2714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbbb2b3f2366d2c6df7f0e9165afaeb2343933fb34b0cb4872853c8c7165d815\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mmxc6" Aug 13 00:51:35.046876 kubelet[2714]: E0813 00:51:35.044892 2714 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbbb2b3f2366d2c6df7f0e9165afaeb2343933fb34b0cb4872853c8c7165d815\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mmxc6" Aug 13 00:51:35.046876 kubelet[2714]: E0813 00:51:35.045622 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mmxc6_calico-system(7697ce71-aa40-4c78-acaa-c59079720a2c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mmxc6_calico-system(7697ce71-aa40-4c78-acaa-c59079720a2c)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"dbbb2b3f2366d2c6df7f0e9165afaeb2343933fb34b0cb4872853c8c7165d815\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mmxc6" podUID="7697ce71-aa40-4c78-acaa-c59079720a2c" Aug 13 00:51:35.049069 systemd[1]: run-netns-cni\x2dba296cb3\x2d5810\x2dfcab\x2da69e\x2dc346ba6b031b.mount: Deactivated successfully. Aug 13 00:51:35.058036 containerd[1575]: time="2025-08-13T00:51:35.054733753Z" level=error msg="Failed to destroy network for sandbox \"4be0de2f72b1bb6ff27edeaeb3a2d529e675b31c192fbf112beb15ca89f4d00c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:51:35.058036 containerd[1575]: time="2025-08-13T00:51:35.055565686Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dnlsw,Uid:fd8972e4-10f5-4f13-8b21-de07e7f562ab,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4be0de2f72b1bb6ff27edeaeb3a2d529e675b31c192fbf112beb15ca89f4d00c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:51:35.057039 systemd[1]: run-netns-cni\x2dee8362d6\x2d93c2\x2dd7b6\x2d71dc\x2db7ab59382d4a.mount: Deactivated successfully. Aug 13 00:51:35.058175 kubelet[2714]: E0813 00:51:35.055741 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4be0de2f72b1bb6ff27edeaeb3a2d529e675b31c192fbf112beb15ca89f4d00c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:51:35.058175 kubelet[2714]: E0813 00:51:35.055772 2714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4be0de2f72b1bb6ff27edeaeb3a2d529e675b31c192fbf112beb15ca89f4d00c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-dnlsw" Aug 13 00:51:35.058175 kubelet[2714]: E0813 00:51:35.055788 2714 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4be0de2f72b1bb6ff27edeaeb3a2d529e675b31c192fbf112beb15ca89f4d00c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-dnlsw" Aug 13 00:51:35.058175 kubelet[2714]: E0813 00:51:35.055814 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-dnlsw_kube-system(fd8972e4-10f5-4f13-8b21-de07e7f562ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-dnlsw_kube-system(fd8972e4-10f5-4f13-8b21-de07e7f562ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4be0de2f72b1bb6ff27edeaeb3a2d529e675b31c192fbf112beb15ca89f4d00c\\\": 
plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-dnlsw" podUID="fd8972e4-10f5-4f13-8b21-de07e7f562ab" Aug 13 00:51:35.106546 kubelet[2714]: I0813 00:51:35.106456 2714 kubelet.go:2306] "Pod admission denied" podUID="edf68777-b9d5-42cd-bede-c6e5fd09be22" pod="tigera-operator/tigera-operator-5bf8dfcb4-wxr9f" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:35.209857 kubelet[2714]: I0813 00:51:35.209729 2714 kubelet.go:2306] "Pod admission denied" podUID="a5b6b20e-f6d5-4256-8a35-3a82d3cf76d6" pod="tigera-operator/tigera-operator-5bf8dfcb4-jwtsq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:35.300391 kubelet[2714]: I0813 00:51:35.300344 2714 kubelet.go:2306] "Pod admission denied" podUID="29ef15af-83e0-4f7c-8090-f81cdafdfbf1" pod="tigera-operator/tigera-operator-5bf8dfcb4-5xnrn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:35.401718 kubelet[2714]: I0813 00:51:35.401619 2714 kubelet.go:2306] "Pod admission denied" podUID="8beac11b-ed02-4aaa-a13c-83c56c71df65" pod="tigera-operator/tigera-operator-5bf8dfcb4-mkp79" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:35.505218 kubelet[2714]: I0813 00:51:35.505166 2714 kubelet.go:2306] "Pod admission denied" podUID="be8e2c43-6ced-45a9-9df8-5cc72bebe22c" pod="tigera-operator/tigera-operator-5bf8dfcb4-rw648" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:35.598728 kubelet[2714]: I0813 00:51:35.598667 2714 kubelet.go:2306] "Pod admission denied" podUID="35be7bbc-0d31-4bd0-b0b5-0db3bd9f07cc" pod="tigera-operator/tigera-operator-5bf8dfcb4-gng5k" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:35.724293 kubelet[2714]: I0813 00:51:35.724006 2714 kubelet.go:2306] "Pod admission denied" podUID="38635b0f-1a29-418d-98d6-3a03a26c74e0" pod="tigera-operator/tigera-operator-5bf8dfcb4-46tq6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:35.908448 kubelet[2714]: I0813 00:51:35.908363 2714 kubelet.go:2306] "Pod admission denied" podUID="38adf194-ca9c-4af3-8c42-3b1993910390" pod="tigera-operator/tigera-operator-5bf8dfcb4-gk9l5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:36.008908 kubelet[2714]: I0813 00:51:36.008779 2714 kubelet.go:2306] "Pod admission denied" podUID="c2c76448-64f1-41ae-ba4b-b77104209f29" pod="tigera-operator/tigera-operator-5bf8dfcb4-b6bgr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:36.114325 kubelet[2714]: I0813 00:51:36.114270 2714 kubelet.go:2306] "Pod admission denied" podUID="a7280a58-5a03-40c8-8b26-68de435aff89" pod="tigera-operator/tigera-operator-5bf8dfcb4-pvr7f" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:36.201367 kubelet[2714]: I0813 00:51:36.200576 2714 kubelet.go:2306] "Pod admission denied" podUID="39ab483a-38e3-4c44-ab26-ab8ce692cc30" pod="tigera-operator/tigera-operator-5bf8dfcb4-ldj7r" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:36.326026 kubelet[2714]: I0813 00:51:36.325550 2714 kubelet.go:2306] "Pod admission denied" podUID="7d5a881a-2f40-489b-9e49-728b98a5034a" pod="tigera-operator/tigera-operator-5bf8dfcb4-lh8zl" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 00:51:36.405613 kubelet[2714]: I0813 00:51:36.405580 2714 kubelet.go:2306] "Pod admission denied" podUID="fc13e4bd-7f6d-44ff-ab2e-31d34e55a646" pod="tigera-operator/tigera-operator-5bf8dfcb4-bshmv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:36.501307 kubelet[2714]: I0813 00:51:36.501274 2714 kubelet.go:2306] "Pod admission denied" podUID="4f5e8bc9-eb7b-488e-967e-07762f2912d2" pod="tigera-operator/tigera-operator-5bf8dfcb4-jg7r6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:36.601745 kubelet[2714]: I0813 00:51:36.601421 2714 kubelet.go:2306] "Pod admission denied" podUID="540d27ee-1219-4a22-9139-0793cddd7238" pod="tigera-operator/tigera-operator-5bf8dfcb4-zwpw6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:36.710264 kubelet[2714]: I0813 00:51:36.709798 2714 kubelet.go:2306] "Pod admission denied" podUID="9dc78f21-dfea-489f-b768-aee3f92e2e03" pod="tigera-operator/tigera-operator-5bf8dfcb4-hx87g" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:36.782627 containerd[1575]: time="2025-08-13T00:51:36.778674370Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount968185550: write /var/lib/containerd/tmpmounts/containerd-mount968185550/usr/bin/calico-node: no space left on device" Aug 13 00:51:36.782627 containerd[1575]: time="2025-08-13T00:51:36.781664440Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Aug 13 00:51:36.781914 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount968185550.mount: Deactivated successfully. 
Aug 13 00:51:36.783328 kubelet[2714]: E0813 00:51:36.782644 2714 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount968185550: write /var/lib/containerd/tmpmounts/containerd-mount968185550/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.2" Aug 13 00:51:36.783328 kubelet[2714]: E0813 00:51:36.782691 2714 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount968185550: write /var/lib/containerd/tmpmounts/containerd-mount968185550/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.2" Aug 13 00:51:36.783409 kubelet[2714]: E0813 00:51:36.782852 2714 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:calico-node,Image:ghcr.io/flatcar/calico/node:v3.30.2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:WAIT_FOR_DATASTORE,Value:true,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:k8s,operator,bgp,ValueFrom:nil,},EnvVar{Name:CALICO_DISABLE_FILE_LOGGING,Value:false,ValueFrom:nil,},EnvVar{Name:FELIX_DEFAULTENDPOINTTOHOSTACTION,Value:ACCEPT,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHENABLED,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHPORT,Value:9099,ValueFrom:nil,},EnvVar{Name:NODENAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:FELIX_TYPHAK8SNAMESPACE,Value:calico-system,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAK8SSERVICENAME,Value:calico-typha,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACAFILE,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACERTFILE,Value:/node-certs/tls.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAKEYFILE,Value:/node-certs/tls.key,ValueFrom:nil,},EnvVar{Name:NO_DEFAULT_POOLS,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSGOLDMANESERVER,Value:goldmane.calico-system.svc:7443,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSFLUSHINTERVAL,Value:15,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACN,Value:typha-server,ValueFrom:nil,},EnvVar{Name:CALICO_MANAGE_CNI,Value:true,ValueFrom:nil,},EnvVar{Name:CALICO_NETWORKING_BACKEND,Value:bird,ValueFrom:nil,},EnvVar{Name:IP,Value:autodetect,ValueFrom:nil,},EnvVar{Name:IP_AUTODETECTION_METHOD,Value:first-found,ValueFrom:nil,},EnvVar{Name:IP6,Value:none,ValueFrom:nil,},EnvVar{Name:FELIX_IPV6SUPPORT,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-log-dir,ReadOnly:false,MountPath:/var/log/calico/cni,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnl
y:nil,},VolumeMount{Name:cni-net-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:node-certs,ReadOnly:true,MountPath:/node-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:policysync,ReadOnly:false,MountPath:/var/run/nodeagent,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib-calico,ReadOnly:false,MountPath:/var/lib/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-calico,ReadOnly:false,MountPath:/var/run/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kmm4k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/liveness,Port:{0 9099 },Host:localhost,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/calico-node -bird-ready -felix-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/bin/calico-node -shutdown],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-node-x7x94_calico-system(ab709cf9-e61c-420b-90c5-1c0355308621): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount968185550: write /var/lib/containerd/tmpmounts/containerd-mount968185550/usr/bin/calico-node: no space left on device" logger="UnhandledError" Aug 13 00:51:36.784157 kubelet[2714]: E0813 00:51:36.784132 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ErrImagePull: \"failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount968185550: write /var/lib/containerd/tmpmounts/containerd-mount968185550/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-x7x94" podUID="ab709cf9-e61c-420b-90c5-1c0355308621" Aug 13 00:51:36.912083 kubelet[2714]: I0813 00:51:36.912035 2714 kubelet.go:2306] "Pod admission denied" podUID="da9ce4cd-b3fa-427b-8ed2-afb6b97672f0" pod="tigera-operator/tigera-operator-5bf8dfcb4-mw866" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:36.998704 kubelet[2714]: I0813 00:51:36.998649 2714 kubelet.go:2306] "Pod admission denied" podUID="76f7a8bc-9565-4e4b-8c6d-d6c812e2f246" pod="tigera-operator/tigera-operator-5bf8dfcb4-c88q4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:37.100643 kubelet[2714]: I0813 00:51:37.100365 2714 kubelet.go:2306] "Pod admission denied" podUID="798f81b8-47d6-4e05-b5a1-c98f913041ea" pod="tigera-operator/tigera-operator-5bf8dfcb4-t82d5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:37.203503 kubelet[2714]: I0813 00:51:37.203371 2714 kubelet.go:2306] "Pod admission denied" podUID="6f79f911-fba3-4013-bcc8-d8a27a30d56b" pod="tigera-operator/tigera-operator-5bf8dfcb4-7rbjl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:37.299812 kubelet[2714]: I0813 00:51:37.299744 2714 kubelet.go:2306] "Pod admission denied" podUID="8874a437-9454-4bd4-a194-0e2ea2d2be21" pod="tigera-operator/tigera-operator-5bf8dfcb4-nlgnn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:37.399537 kubelet[2714]: I0813 00:51:37.399471 2714 kubelet.go:2306] "Pod admission denied" podUID="72f292c8-7986-4041-87a8-fd21a90e7c07" pod="tigera-operator/tigera-operator-5bf8dfcb4-kk8v7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:37.503483 kubelet[2714]: I0813 00:51:37.503369 2714 kubelet.go:2306] "Pod admission denied" podUID="9546b3f8-88c3-4150-92a2-5bd6cdeaa588" pod="tigera-operator/tigera-operator-5bf8dfcb4-fzff5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:37.601024 kubelet[2714]: I0813 00:51:37.600966 2714 kubelet.go:2306] "Pod admission denied" podUID="b1d0e284-90bf-46ad-bd39-6cd41a72d271" pod="tigera-operator/tigera-operator-5bf8dfcb4-j9jnh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:37.649002 kubelet[2714]: I0813 00:51:37.648949 2714 kubelet.go:2306] "Pod admission denied" podUID="789433d4-a709-45b4-a236-f475c5cf375b" pod="tigera-operator/tigera-operator-5bf8dfcb4-2nnbc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:37.756609 kubelet[2714]: I0813 00:51:37.755753 2714 kubelet.go:2306] "Pod admission denied" podUID="e7a97705-11a4-4297-ac51-8ff3e023d34e" pod="tigera-operator/tigera-operator-5bf8dfcb4-9pgg5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:37.849003 kubelet[2714]: I0813 00:51:37.848960 2714 kubelet.go:2306] "Pod admission denied" podUID="681e9391-f2f6-4d25-8629-b59516d0ae7c" pod="tigera-operator/tigera-operator-5bf8dfcb4-5t7sx" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 00:51:37.988949 kubelet[2714]: I0813 00:51:37.988319 2714 kubelet.go:2306] "Pod admission denied" podUID="e0839a16-ee20-400d-a249-39e65fa77ba4" pod="tigera-operator/tigera-operator-5bf8dfcb4-ffzrw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:38.149903 kubelet[2714]: I0813 00:51:38.149850 2714 kubelet.go:2306] "Pod admission denied" podUID="b57934d2-a1a3-4327-9fb1-007c44fb821e" pod="tigera-operator/tigera-operator-5bf8dfcb4-2lx4d" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:38.252863 kubelet[2714]: I0813 00:51:38.252821 2714 kubelet.go:2306] "Pod admission denied" podUID="3af620cc-4d89-4e82-878d-01f0005030a4" pod="tigera-operator/tigera-operator-5bf8dfcb4-6ljxn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:38.364541 kubelet[2714]: I0813 00:51:38.363820 2714 kubelet.go:2306] "Pod admission denied" podUID="35682360-c512-45dc-84fc-d89b8b6848d0" pod="tigera-operator/tigera-operator-5bf8dfcb4-scmps" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:38.450412 kubelet[2714]: I0813 00:51:38.450294 2714 kubelet.go:2306] "Pod admission denied" podUID="e38bd2d0-53d0-4f9b-b1ef-2e6f48491a5d" pod="tigera-operator/tigera-operator-5bf8dfcb4-f5gpj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:38.550255 kubelet[2714]: I0813 00:51:38.550196 2714 kubelet.go:2306] "Pod admission denied" podUID="1412348a-b392-4c6a-90a3-6148f0ed9d09" pod="tigera-operator/tigera-operator-5bf8dfcb4-nl6p9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:38.650429 kubelet[2714]: I0813 00:51:38.650390 2714 kubelet.go:2306] "Pod admission denied" podUID="afd4ffbe-9744-4967-b20b-f593a24aa177" pod="tigera-operator/tigera-operator-5bf8dfcb4-cbtg9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:38.699813 kubelet[2714]: I0813 00:51:38.699758 2714 kubelet.go:2306] "Pod admission denied" podUID="6f13ba6a-f08e-486c-9434-38c7221c5ada" pod="tigera-operator/tigera-operator-5bf8dfcb4-gh4ht" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:38.810556 kubelet[2714]: I0813 00:51:38.807357 2714 kubelet.go:2306] "Pod admission denied" podUID="5ca40dce-abfe-425b-ab96-de57449b0365" pod="tigera-operator/tigera-operator-5bf8dfcb4-wmgls" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:38.902431 kubelet[2714]: I0813 00:51:38.902354 2714 kubelet.go:2306] "Pod admission denied" podUID="d24e0f39-60af-4648-906c-23d2fe331f95" pod="tigera-operator/tigera-operator-5bf8dfcb4-pd6dh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:38.949060 kubelet[2714]: I0813 00:51:38.949001 2714 kubelet.go:2306] "Pod admission denied" podUID="ca0dfdd8-8bee-4bd2-a385-507d9ae9112f" pod="tigera-operator/tigera-operator-5bf8dfcb4-rpnfm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:39.047983 kubelet[2714]: I0813 00:51:39.047924 2714 kubelet.go:2306] "Pod admission denied" podUID="f1ff3cf2-dd36-438d-85a6-4f717ddd5d2f" pod="tigera-operator/tigera-operator-5bf8dfcb4-dzthv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:39.149812 kubelet[2714]: I0813 00:51:39.149753 2714 kubelet.go:2306] "Pod admission denied" podUID="207757b2-5576-4a6d-98ee-689e4137bbd3" pod="tigera-operator/tigera-operator-5bf8dfcb4-z7dkz" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 00:51:39.202584 kubelet[2714]: I0813 00:51:39.202537 2714 kubelet.go:2306] "Pod admission denied" podUID="601fecf3-c647-42dc-bd88-6e4487369fda" pod="tigera-operator/tigera-operator-5bf8dfcb4-jmzlp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:39.300936 kubelet[2714]: I0813 00:51:39.300885 2714 kubelet.go:2306] "Pod admission denied" podUID="826d4ca9-2541-4307-9726-0f9e3b16c2ab" pod="tigera-operator/tigera-operator-5bf8dfcb4-x72l6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:39.400553 kubelet[2714]: I0813 00:51:39.400117 2714 kubelet.go:2306] "Pod admission denied" podUID="508ace2b-124e-40e4-8175-e89ae8402e04" pod="tigera-operator/tigera-operator-5bf8dfcb4-vp2x4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:39.513248 kubelet[2714]: I0813 00:51:39.513181 2714 kubelet.go:2306] "Pod admission denied" podUID="7ccf340e-29a9-4473-9ad2-38ba7c5e6c6c" pod="tigera-operator/tigera-operator-5bf8dfcb4-v5jbz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:39.598430 kubelet[2714]: I0813 00:51:39.598380 2714 kubelet.go:2306] "Pod admission denied" podUID="d007c0bd-1254-424a-90fd-8f038c609312" pod="tigera-operator/tigera-operator-5bf8dfcb4-wlshx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:39.702930 kubelet[2714]: I0813 00:51:39.702639 2714 kubelet.go:2306] "Pod admission denied" podUID="b7fdc0f8-f901-48a2-8200-0eff1742f518" pod="tigera-operator/tigera-operator-5bf8dfcb4-qpptf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:39.803844 kubelet[2714]: I0813 00:51:39.803785 2714 kubelet.go:2306] "Pod admission denied" podUID="5384066d-cbbb-4de0-a714-be10a282c613" pod="tigera-operator/tigera-operator-5bf8dfcb4-6q2zn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:39.911856 kubelet[2714]: I0813 00:51:39.911544 2714 kubelet.go:2306] "Pod admission denied" podUID="ce3fc3f3-b2f3-46d8-8da0-d6e2b9f94c96" pod="tigera-operator/tigera-operator-5bf8dfcb4-x8fwk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:39.998729 kubelet[2714]: I0813 00:51:39.998625 2714 kubelet.go:2306] "Pod admission denied" podUID="7de19e3e-5189-4b1b-acf1-9de8e4d138c9" pod="tigera-operator/tigera-operator-5bf8dfcb4-gk7fc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:40.057556 kubelet[2714]: I0813 00:51:40.056811 2714 kubelet.go:2306] "Pod admission denied" podUID="95539cef-a887-40f8-bffb-80c63c1feb28" pod="tigera-operator/tigera-operator-5bf8dfcb4-zj557" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:40.146757 kubelet[2714]: I0813 00:51:40.146704 2714 kubelet.go:2306] "Pod admission denied" podUID="efcc2606-cd61-4cc5-aac7-1bbeed1fdc8c" pod="tigera-operator/tigera-operator-5bf8dfcb4-7jdkd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:40.433038 kubelet[2714]: I0813 00:51:40.432983 2714 kubelet.go:2306] "Pod admission denied" podUID="234720c8-e14a-400e-8924-f73a6c4ff8cb" pod="tigera-operator/tigera-operator-5bf8dfcb4-7htn9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 00:51:40.474538 kubelet[2714]: I0813 00:51:40.474242 2714 kubelet.go:2306] "Pod admission denied" podUID="25c14900-aed5-4b8b-a826-3854a60f4f7c" pod="tigera-operator/tigera-operator-5bf8dfcb4-chv4f" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 00:51:41.117264 systemd[1]: Started sshd@11-172.234.199.101:22-147.75.109.163:53688.service - OpenSSH per-connection server daemon (147.75.109.163:53688). Aug 13 00:51:41.464340 sshd[4709]: Accepted publickey for core from 147.75.109.163 port 53688 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw Aug 13 00:51:41.465959 sshd-session[4709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:51:41.471619 systemd-logind[1528]: New session 8 of user core. Aug 13 00:51:41.478655 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 13 00:51:41.780029 sshd[4711]: Connection closed by 147.75.109.163 port 53688 Aug 13 00:51:41.780761 sshd-session[4709]: pam_unix(sshd:session): session closed for user core Aug 13 00:51:41.786176 systemd-logind[1528]: Session 8 logged out. Waiting for processes to exit. Aug 13 00:51:41.786349 systemd[1]: sshd@11-172.234.199.101:22-147.75.109.163:53688.service: Deactivated successfully. Aug 13 00:51:41.788979 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 00:51:41.790450 systemd-logind[1528]: Removed session 8. Aug 13 00:51:41.955783 kubelet[2714]: E0813 00:51:41.955755 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:51:41.957327 containerd[1575]: time="2025-08-13T00:51:41.957270236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hxx58,Uid:635fd78e-3d10-4a30-9894-3818897e1867,Namespace:kube-system,Attempt:0,}" Aug 13 00:51:42.008130 containerd[1575]: time="2025-08-13T00:51:42.008085973Z" level=error msg="Failed to destroy network for sandbox \"e4383262e311a0394d66fbc161bbc2f4860b189661cd49816675fd974fca1644\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:51:42.010314 systemd[1]: run-netns-cni\x2d96899ab5\x2d2132\x2d83dd\x2daa50\x2deff757bc1dcc.mount: Deactivated successfully. 
Aug 13 00:51:42.011149 containerd[1575]: time="2025-08-13T00:51:42.010963381Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hxx58,Uid:635fd78e-3d10-4a30-9894-3818897e1867,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4383262e311a0394d66fbc161bbc2f4860b189661cd49816675fd974fca1644\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:51:42.011247 kubelet[2714]: E0813 00:51:42.011215 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4383262e311a0394d66fbc161bbc2f4860b189661cd49816675fd974fca1644\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:51:42.011727 kubelet[2714]: E0813 00:51:42.011635 2714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4383262e311a0394d66fbc161bbc2f4860b189661cd49816675fd974fca1644\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-hxx58" Aug 13 00:51:42.011727 kubelet[2714]: E0813 00:51:42.011661 2714 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4383262e311a0394d66fbc161bbc2f4860b189661cd49816675fd974fca1644\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-hxx58" Aug 13 00:51:42.011829 kubelet[2714]: E0813 00:51:42.011722 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-hxx58_kube-system(635fd78e-3d10-4a30-9894-3818897e1867)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-hxx58_kube-system(635fd78e-3d10-4a30-9894-3818897e1867)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e4383262e311a0394d66fbc161bbc2f4860b189661cd49816675fd974fca1644\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-hxx58" podUID="635fd78e-3d10-4a30-9894-3818897e1867" Aug 13 00:51:43.625774 kubelet[2714]: I0813 00:51:43.625732 2714 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:51:43.625774 kubelet[2714]: I0813 00:51:43.625770 2714 container_gc.go:88] "Attempting to delete unused containers" Aug 13 00:51:43.627844 kubelet[2714]: I0813 00:51:43.627780 2714 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:51:43.630341 kubelet[2714]: I0813 00:51:43.630082 2714 image_gc_manager.go:487] "Removing image to free bytes" imageID="sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" size=56909194 runtimeHandler="" Aug 13 00:51:43.630759 containerd[1575]: time="2025-08-13T00:51:43.630564647Z" level=info 
msg="RemoveImage \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Aug 13 00:51:43.631661 containerd[1575]: time="2025-08-13T00:51:43.631612550Z" level=info msg="ImageDelete event name:\"registry.k8s.io/etcd:3.5.15-0\"" Aug 13 00:51:43.632560 containerd[1575]: time="2025-08-13T00:51:43.632537962Z" level=info msg="ImageDelete event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\"" Aug 13 00:51:43.632906 containerd[1575]: time="2025-08-13T00:51:43.632866703Z" level=info msg="RemoveImage \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" returns successfully" Aug 13 00:51:43.633059 containerd[1575]: time="2025-08-13T00:51:43.632983233Z" level=info msg="ImageDelete event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Aug 13 00:51:43.633092 kubelet[2714]: I0813 00:51:43.633044 2714 image_gc_manager.go:487] "Removing image to free bytes" imageID="sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" size=18562039 runtimeHandler="" Aug 13 00:51:43.633388 containerd[1575]: time="2025-08-13T00:51:43.633163864Z" level=info msg="RemoveImage \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 13 00:51:43.633969 containerd[1575]: time="2025-08-13T00:51:43.633945356Z" level=info msg="ImageDelete event name:\"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 00:51:43.634438 containerd[1575]: time="2025-08-13T00:51:43.634416968Z" level=info msg="ImageDelete event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\"" Aug 13 00:51:43.634825 containerd[1575]: time="2025-08-13T00:51:43.634804799Z" level=info msg="RemoveImage \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" returns successfully" Aug 13 00:51:43.635096 containerd[1575]: time="2025-08-13T00:51:43.635064329Z" level=info msg="ImageDelete event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 13 00:51:43.644393 kubelet[2714]: I0813 00:51:43.644352 2714 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:51:43.644554 kubelet[2714]: I0813 00:51:43.644455 2714 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-6d647ccb87-wkv5s","kube-system/coredns-7c65d6cfc9-hxx58","kube-system/coredns-7c65d6cfc9-dnlsw","calico-system/calico-node-x7x94","calico-system/csi-node-driver-mmxc6","calico-system/calico-typha-644589c98-5v7wp","kube-system/kube-controller-manager-172-234-199-101","kube-system/kube-proxy-vxgfg","kube-system/kube-apiserver-172-234-199-101","kube-system/kube-scheduler-172-234-199-101"] Aug 13 00:51:43.644554 kubelet[2714]: E0813 00:51:43.644487 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" Aug 13 00:51:43.644554 kubelet[2714]: E0813 00:51:43.644497 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-hxx58" Aug 13 00:51:43.644554 kubelet[2714]: E0813 00:51:43.644503 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-dnlsw" Aug 13 00:51:43.644554 kubelet[2714]: E0813 00:51:43.644510 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-x7x94" Aug 
13 00:51:43.644554 kubelet[2714]: E0813 00:51:43.644558 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-mmxc6" Aug 13 00:51:43.644723 kubelet[2714]: E0813 00:51:43.644573 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-644589c98-5v7wp" Aug 13 00:51:43.644723 kubelet[2714]: E0813 00:51:43.644582 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-199-101" Aug 13 00:51:43.644723 kubelet[2714]: E0813 00:51:43.644589 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-vxgfg" Aug 13 00:51:43.644723 kubelet[2714]: E0813 00:51:43.644597 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-199-101" Aug 13 00:51:43.644723 kubelet[2714]: E0813 00:51:43.644605 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-199-101" Aug 13 00:51:43.644723 kubelet[2714]: I0813 00:51:43.644615 2714 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 00:51:46.841568 systemd[1]: Started sshd@12-172.234.199.101:22-147.75.109.163:53704.service - OpenSSH per-connection server daemon (147.75.109.163:53704). Aug 13 00:51:47.179988 sshd[4751]: Accepted publickey for core from 147.75.109.163 port 53704 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw Aug 13 00:51:47.181315 sshd-session[4751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:51:47.186450 systemd-logind[1528]: New session 9 of user core. Aug 13 00:51:47.193647 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 13 00:51:47.479039 sshd[4753]: Connection closed by 147.75.109.163 port 53704 Aug 13 00:51:47.479703 sshd-session[4751]: pam_unix(sshd:session): session closed for user core Aug 13 00:51:47.484660 systemd[1]: sshd@12-172.234.199.101:22-147.75.109.163:53704.service: Deactivated successfully. Aug 13 00:51:47.486713 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 00:51:47.488585 systemd-logind[1528]: Session 9 logged out. Waiting for processes to exit. Aug 13 00:51:47.490123 systemd-logind[1528]: Removed session 9. 
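The eviction pass logged just above frees roughly 72 MiB (56,909,194 + 18,562,039 bytes) by deleting the unused etcd and coredns images, still needs ephemeral-storage, ranks the remaining pods, and then skips every candidate because all of them are critical (static control-plane pods and node-critical DaemonSet pods), ending with "unable to evict any pods from the node". A minimal sketch of that skip-and-give-up loop (the data structure and flag are illustrative, not taken from kubelet source):

```python
# Illustrative sketch only: the shape of the eviction loop reported above.
# Every ranked pod is critical, so nothing is evicted and DiskPressure persists.
def evict_one(ranked_pods):
    for pod in ranked_pods:  # already ranked for eviction by the manager
        if pod["critical"]:
            print(f"cannot evict a critical pod: {pod['name']}")
            continue
        return pod["name"]   # the first non-critical pod would be evicted
    return None              # "unable to evict any pods from the node"

ranked = [
    {"name": "calico-system/calico-kube-controllers-6d647ccb87-wkv5s", "critical": True},
    {"name": "kube-system/kube-proxy-vxgfg", "critical": True},
]
print(evict_one(ranked))  # None
```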
Aug 13 00:51:48.956744 containerd[1575]: time="2025-08-13T00:51:48.955731298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d647ccb87-wkv5s,Uid:2addd270-7ed2-4caf-9455-ca6a63f6fe8b,Namespace:calico-system,Attempt:0,}" Aug 13 00:51:48.956744 containerd[1575]: time="2025-08-13T00:51:48.956437030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dnlsw,Uid:fd8972e4-10f5-4f13-8b21-de07e7f562ab,Namespace:kube-system,Attempt:0,}" Aug 13 00:51:48.956744 containerd[1575]: time="2025-08-13T00:51:48.956655530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mmxc6,Uid:7697ce71-aa40-4c78-acaa-c59079720a2c,Namespace:calico-system,Attempt:0,}" Aug 13 00:51:48.957205 kubelet[2714]: E0813 00:51:48.955848 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:51:48.957616 kubelet[2714]: E0813 00:51:48.957576 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\"\"" pod="calico-system/calico-node-x7x94" podUID="ab709cf9-e61c-420b-90c5-1c0355308621" Aug 13 00:51:49.054801 containerd[1575]: time="2025-08-13T00:51:49.054652629Z" level=error msg="Failed to destroy network for sandbox \"75de54d53839be8a21329d7fa2d948edd0e0ef4ed1eaaac9a7675adf986f6933\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:51:49.056761 containerd[1575]: time="2025-08-13T00:51:49.056729244Z" level=error msg="Failed to destroy network for sandbox \"b566d7fe9c471b0861b43a7758b6bd13387683a08ac00ff5858934399aa27d0a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:51:49.057894 containerd[1575]: time="2025-08-13T00:51:49.057568967Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dnlsw,Uid:fd8972e4-10f5-4f13-8b21-de07e7f562ab,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"75de54d53839be8a21329d7fa2d948edd0e0ef4ed1eaaac9a7675adf986f6933\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:51:49.057688 systemd[1]: run-netns-cni\x2d8e3484a4\x2db909\x2dd122\x2d2f55\x2dbfbf54531d86.mount: Deactivated successfully. Aug 13 00:51:49.061468 systemd[1]: run-netns-cni\x2d8702ef92\x2d80fa\x2d51dd\x2df450\x2d5d89895c2678.mount: Deactivated successfully. 
Aug 13 00:51:49.062550 containerd[1575]: time="2025-08-13T00:51:49.062350979Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d647ccb87-wkv5s,Uid:2addd270-7ed2-4caf-9455-ca6a63f6fe8b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b566d7fe9c471b0861b43a7758b6bd13387683a08ac00ff5858934399aa27d0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:51:49.063725 kubelet[2714]: E0813 00:51:49.063700 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b566d7fe9c471b0861b43a7758b6bd13387683a08ac00ff5858934399aa27d0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:51:49.063864 kubelet[2714]: E0813 00:51:49.063848 2714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b566d7fe9c471b0861b43a7758b6bd13387683a08ac00ff5858934399aa27d0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" Aug 13 00:51:49.063932 kubelet[2714]: E0813 00:51:49.063918 2714 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b566d7fe9c471b0861b43a7758b6bd13387683a08ac00ff5858934399aa27d0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" Aug 13 00:51:49.064030 kubelet[2714]: E0813 00:51:49.063997 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6d647ccb87-wkv5s_calico-system(2addd270-7ed2-4caf-9455-ca6a63f6fe8b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6d647ccb87-wkv5s_calico-system(2addd270-7ed2-4caf-9455-ca6a63f6fe8b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b566d7fe9c471b0861b43a7758b6bd13387683a08ac00ff5858934399aa27d0a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" podUID="2addd270-7ed2-4caf-9455-ca6a63f6fe8b" Aug 13 00:51:49.064311 kubelet[2714]: E0813 00:51:49.064292 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75de54d53839be8a21329d7fa2d948edd0e0ef4ed1eaaac9a7675adf986f6933\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:51:49.064389 kubelet[2714]: E0813 00:51:49.064375 2714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"75de54d53839be8a21329d7fa2d948edd0e0ef4ed1eaaac9a7675adf986f6933\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-dnlsw" Aug 13 00:51:49.064760 kubelet[2714]: E0813 00:51:49.064443 2714 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75de54d53839be8a21329d7fa2d948edd0e0ef4ed1eaaac9a7675adf986f6933\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-dnlsw" Aug 13 00:51:49.064760 kubelet[2714]: E0813 00:51:49.064476 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-dnlsw_kube-system(fd8972e4-10f5-4f13-8b21-de07e7f562ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-dnlsw_kube-system(fd8972e4-10f5-4f13-8b21-de07e7f562ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"75de54d53839be8a21329d7fa2d948edd0e0ef4ed1eaaac9a7675adf986f6933\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-dnlsw" podUID="fd8972e4-10f5-4f13-8b21-de07e7f562ab" Aug 13 00:51:49.077131 containerd[1575]: time="2025-08-13T00:51:49.077094776Z" level=error msg="Failed to destroy network for sandbox \"eedbd63d59f0b291fe02c1811a9a28274051d7bfb94050eac2ec6d79305b410f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:51:49.078118 containerd[1575]: time="2025-08-13T00:51:49.078082288Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mmxc6,Uid:7697ce71-aa40-4c78-acaa-c59079720a2c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"eedbd63d59f0b291fe02c1811a9a28274051d7bfb94050eac2ec6d79305b410f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:51:49.078375 kubelet[2714]: E0813 00:51:49.078350 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eedbd63d59f0b291fe02c1811a9a28274051d7bfb94050eac2ec6d79305b410f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:51:49.078476 kubelet[2714]: E0813 00:51:49.078461 2714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eedbd63d59f0b291fe02c1811a9a28274051d7bfb94050eac2ec6d79305b410f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mmxc6" Aug 13 00:51:49.078556 kubelet[2714]: E0813 00:51:49.078536 2714 
kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eedbd63d59f0b291fe02c1811a9a28274051d7bfb94050eac2ec6d79305b410f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mmxc6" Aug 13 00:51:49.078610 kubelet[2714]: E0813 00:51:49.078586 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mmxc6_calico-system(7697ce71-aa40-4c78-acaa-c59079720a2c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mmxc6_calico-system(7697ce71-aa40-4c78-acaa-c59079720a2c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eedbd63d59f0b291fe02c1811a9a28274051d7bfb94050eac2ec6d79305b410f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mmxc6" podUID="7697ce71-aa40-4c78-acaa-c59079720a2c" Aug 13 00:51:49.962301 systemd[1]: run-netns-cni\x2d93a2f1bc\x2d5f00\x2de910\x2da803\x2d8b98d9471f5a.mount: Deactivated successfully. Aug 13 00:51:52.548712 systemd[1]: Started sshd@13-172.234.199.101:22-147.75.109.163:42544.service - OpenSSH per-connection server daemon (147.75.109.163:42544). Aug 13 00:51:52.895928 sshd[4847]: Accepted publickey for core from 147.75.109.163 port 42544 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw Aug 13 00:51:52.897287 sshd-session[4847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:51:52.902903 systemd-logind[1528]: New session 10 of user core. Aug 13 00:51:52.908643 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 13 00:51:53.216738 sshd[4849]: Connection closed by 147.75.109.163 port 42544 Aug 13 00:51:53.217795 sshd-session[4847]: pam_unix(sshd:session): session closed for user core Aug 13 00:51:53.223011 systemd-logind[1528]: Session 10 logged out. Waiting for processes to exit. Aug 13 00:51:53.223790 systemd[1]: sshd@13-172.234.199.101:22-147.75.109.163:42544.service: Deactivated successfully. Aug 13 00:51:53.226878 systemd[1]: session-10.scope: Deactivated successfully. Aug 13 00:51:53.229513 systemd-logind[1528]: Removed session 10. Aug 13 00:51:53.279579 systemd[1]: Started sshd@14-172.234.199.101:22-147.75.109.163:42554.service - OpenSSH per-connection server daemon (147.75.109.163:42554). Aug 13 00:51:53.622002 sshd[4862]: Accepted publickey for core from 147.75.109.163 port 42554 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw Aug 13 00:51:53.623155 sshd-session[4862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:51:53.627569 systemd-logind[1528]: New session 11 of user core. Aug 13 00:51:53.630630 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 13 00:51:53.951857 sshd[4864]: Connection closed by 147.75.109.163 port 42554 Aug 13 00:51:53.952869 sshd-session[4862]: pam_unix(sshd:session): session closed for user core Aug 13 00:51:53.957175 systemd-logind[1528]: Session 11 logged out. Waiting for processes to exit. Aug 13 00:51:53.958149 systemd[1]: sshd@14-172.234.199.101:22-147.75.109.163:42554.service: Deactivated successfully. 
Aug 13 00:51:53.961253 systemd[1]: session-11.scope: Deactivated successfully. Aug 13 00:51:53.962859 systemd-logind[1528]: Removed session 11. Aug 13 00:51:54.014730 systemd[1]: Started sshd@15-172.234.199.101:22-147.75.109.163:42564.service - OpenSSH per-connection server daemon (147.75.109.163:42564). Aug 13 00:51:54.358169 sshd[4874]: Accepted publickey for core from 147.75.109.163 port 42564 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw Aug 13 00:51:54.359798 sshd-session[4874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:51:54.366080 systemd-logind[1528]: New session 12 of user core. Aug 13 00:51:54.369631 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 13 00:51:54.661689 sshd[4876]: Connection closed by 147.75.109.163 port 42564 Aug 13 00:51:54.662429 sshd-session[4874]: pam_unix(sshd:session): session closed for user core Aug 13 00:51:54.666639 systemd-logind[1528]: Session 12 logged out. Waiting for processes to exit. Aug 13 00:51:54.667292 systemd[1]: sshd@15-172.234.199.101:22-147.75.109.163:42564.service: Deactivated successfully. Aug 13 00:51:54.669233 systemd[1]: session-12.scope: Deactivated successfully. Aug 13 00:51:54.671386 systemd-logind[1528]: Removed session 12. Aug 13 00:51:56.955944 kubelet[2714]: E0813 00:51:56.955885 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:51:56.956969 containerd[1575]: time="2025-08-13T00:51:56.956415741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hxx58,Uid:635fd78e-3d10-4a30-9894-3818897e1867,Namespace:kube-system,Attempt:0,}" Aug 13 00:51:57.010064 containerd[1575]: time="2025-08-13T00:51:57.010007532Z" level=error msg="Failed to destroy network for sandbox \"7edf5f5adaba0c63e4f096331e22e1e8684613c717246725b0082ceec87ebd7f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:51:57.013770 systemd[1]: run-netns-cni\x2dca91b0b6\x2dca5a\x2dbfd2\x2d34a8\x2da3c5d08fd552.mount: Deactivated successfully. 
Aug 13 00:51:57.015733 containerd[1575]: time="2025-08-13T00:51:57.015684395Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hxx58,Uid:635fd78e-3d10-4a30-9894-3818897e1867,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7edf5f5adaba0c63e4f096331e22e1e8684613c717246725b0082ceec87ebd7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:51:57.015962 kubelet[2714]: E0813 00:51:57.015927 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7edf5f5adaba0c63e4f096331e22e1e8684613c717246725b0082ceec87ebd7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:51:57.016023 kubelet[2714]: E0813 00:51:57.016001 2714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7edf5f5adaba0c63e4f096331e22e1e8684613c717246725b0082ceec87ebd7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-hxx58" Aug 13 00:51:57.016049 kubelet[2714]: E0813 00:51:57.016026 2714 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7edf5f5adaba0c63e4f096331e22e1e8684613c717246725b0082ceec87ebd7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-hxx58" Aug 13 00:51:57.016113 kubelet[2714]: E0813 00:51:57.016088 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-hxx58_kube-system(635fd78e-3d10-4a30-9894-3818897e1867)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-hxx58_kube-system(635fd78e-3d10-4a30-9894-3818897e1867)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7edf5f5adaba0c63e4f096331e22e1e8684613c717246725b0082ceec87ebd7f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-hxx58" podUID="635fd78e-3d10-4a30-9894-3818897e1867" Aug 13 00:51:59.725575 systemd[1]: Started sshd@16-172.234.199.101:22-147.75.109.163:48986.service - OpenSSH per-connection server daemon (147.75.109.163:48986). 
Aug 13 00:51:59.957353 kubelet[2714]: E0813 00:51:59.957230 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\"\"" pod="calico-system/calico-node-x7x94" podUID="ab709cf9-e61c-420b-90c5-1c0355308621" Aug 13 00:52:00.079979 sshd[4914]: Accepted publickey for core from 147.75.109.163 port 48986 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw Aug 13 00:52:00.082871 sshd-session[4914]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:52:00.092237 systemd-logind[1528]: New session 13 of user core. Aug 13 00:52:00.095669 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 13 00:52:00.387540 sshd[4916]: Connection closed by 147.75.109.163 port 48986 Aug 13 00:52:00.388187 sshd-session[4914]: pam_unix(sshd:session): session closed for user core Aug 13 00:52:00.392826 systemd-logind[1528]: Session 13 logged out. Waiting for processes to exit. Aug 13 00:52:00.393794 systemd[1]: sshd@16-172.234.199.101:22-147.75.109.163:48986.service: Deactivated successfully. Aug 13 00:52:00.395925 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 00:52:00.398029 systemd-logind[1528]: Removed session 13. Aug 13 00:52:00.450740 systemd[1]: Started sshd@17-172.234.199.101:22-147.75.109.163:49000.service - OpenSSH per-connection server daemon (147.75.109.163:49000). Aug 13 00:52:00.803044 sshd[4928]: Accepted publickey for core from 147.75.109.163 port 49000 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw Aug 13 00:52:00.805346 sshd-session[4928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:52:00.815643 systemd-logind[1528]: New session 14 of user core. Aug 13 00:52:00.819662 systemd[1]: Started session-14.scope - Session 14 of User core. 
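The "Back-off pulling image" entry above is the kubelet's ImagePullBackOff state for calico-node: after repeated pull failures (here rooted in the earlier "no space left on device"), retries are delayed by a doubling back-off. The 10 s initial delay and 5 min cap in this sketch are assumed, commonly cited defaults, not values read from this node's configuration:

```python
# Sketch of a doubling back-off like the one behind "ImagePullBackOff".
# The 10 s initial delay and 300 s cap are assumptions for illustration.
def backoff_delays(initial=10, cap=300, attempts=7):
    delay, delays = initial, []
    for _ in range(attempts):
        delays.append(delay)
        delay = min(delay * 2, cap)
    return delays

print(backoff_delays())  # [10, 20, 40, 80, 160, 300, 300]
```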
Aug 13 00:52:00.955586 kubelet[2714]: E0813 00:52:00.955552 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:52:00.956093 containerd[1575]: time="2025-08-13T00:52:00.956045543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dnlsw,Uid:fd8972e4-10f5-4f13-8b21-de07e7f562ab,Namespace:kube-system,Attempt:0,}" Aug 13 00:52:01.006854 containerd[1575]: time="2025-08-13T00:52:01.006812141Z" level=error msg="Failed to destroy network for sandbox \"70ebe277e78f547d007ecc8035df9350829eb0db5649e2014adac6a67f231c01\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:01.008305 containerd[1575]: time="2025-08-13T00:52:01.008269434Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dnlsw,Uid:fd8972e4-10f5-4f13-8b21-de07e7f562ab,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"70ebe277e78f547d007ecc8035df9350829eb0db5649e2014adac6a67f231c01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:01.008717 kubelet[2714]: E0813 00:52:01.008674 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70ebe277e78f547d007ecc8035df9350829eb0db5649e2014adac6a67f231c01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:01.008990 kubelet[2714]: E0813 00:52:01.008958 2714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70ebe277e78f547d007ecc8035df9350829eb0db5649e2014adac6a67f231c01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-dnlsw" Aug 13 00:52:01.008990 kubelet[2714]: E0813 00:52:01.008982 2714 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70ebe277e78f547d007ecc8035df9350829eb0db5649e2014adac6a67f231c01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-dnlsw" Aug 13 00:52:01.010187 systemd[1]: run-netns-cni\x2db7f25c69\x2dc2bd\x2db1e5\x2d08c5\x2de068776e8def.mount: Deactivated successfully. 
Aug 13 00:52:01.011556 kubelet[2714]: E0813 00:52:01.009183 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-dnlsw_kube-system(fd8972e4-10f5-4f13-8b21-de07e7f562ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-dnlsw_kube-system(fd8972e4-10f5-4f13-8b21-de07e7f562ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"70ebe277e78f547d007ecc8035df9350829eb0db5649e2014adac6a67f231c01\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-dnlsw" podUID="fd8972e4-10f5-4f13-8b21-de07e7f562ab" Aug 13 00:52:01.143476 sshd[4930]: Connection closed by 147.75.109.163 port 49000 Aug 13 00:52:01.145356 sshd-session[4928]: pam_unix(sshd:session): session closed for user core Aug 13 00:52:01.150275 systemd[1]: sshd@17-172.234.199.101:22-147.75.109.163:49000.service: Deactivated successfully. Aug 13 00:52:01.153022 systemd[1]: session-14.scope: Deactivated successfully. Aug 13 00:52:01.153915 systemd-logind[1528]: Session 14 logged out. Waiting for processes to exit. Aug 13 00:52:01.155860 systemd-logind[1528]: Removed session 14. Aug 13 00:52:01.204604 systemd[1]: Started sshd@18-172.234.199.101:22-147.75.109.163:49008.service - OpenSSH per-connection server daemon (147.75.109.163:49008). Aug 13 00:52:01.543565 sshd[4965]: Accepted publickey for core from 147.75.109.163 port 49008 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw Aug 13 00:52:01.544840 sshd-session[4965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:52:01.550040 systemd-logind[1528]: New session 15 of user core. Aug 13 00:52:01.554639 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 13 00:52:03.137736 sshd[4967]: Connection closed by 147.75.109.163 port 49008 Aug 13 00:52:03.138337 sshd-session[4965]: pam_unix(sshd:session): session closed for user core Aug 13 00:52:03.141468 systemd[1]: sshd@18-172.234.199.101:22-147.75.109.163:49008.service: Deactivated successfully. Aug 13 00:52:03.143612 systemd[1]: session-15.scope: Deactivated successfully. Aug 13 00:52:03.143937 systemd[1]: session-15.scope: Consumed 471ms CPU time, 71.5M memory peak. Aug 13 00:52:03.145154 systemd-logind[1528]: Session 15 logged out. Waiting for processes to exit. Aug 13 00:52:03.147947 systemd-logind[1528]: Removed session 15. Aug 13 00:52:03.196672 systemd[1]: Started sshd@19-172.234.199.101:22-147.75.109.163:49012.service - OpenSSH per-connection server daemon (147.75.109.163:49012). Aug 13 00:52:03.534354 sshd[4985]: Accepted publickey for core from 147.75.109.163 port 49012 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw Aug 13 00:52:03.535806 sshd-session[4985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:52:03.543744 systemd-logind[1528]: New session 16 of user core. Aug 13 00:52:03.548660 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 13 00:52:03.930757 sshd[4987]: Connection closed by 147.75.109.163 port 49012 Aug 13 00:52:03.931411 sshd-session[4985]: pam_unix(sshd:session): session closed for user core Aug 13 00:52:03.935707 systemd[1]: sshd@19-172.234.199.101:22-147.75.109.163:49012.service: Deactivated successfully. Aug 13 00:52:03.938003 systemd[1]: session-16.scope: Deactivated successfully. 
Aug 13 00:52:03.939074 systemd-logind[1528]: Session 16 logged out. Waiting for processes to exit. Aug 13 00:52:03.941345 systemd-logind[1528]: Removed session 16. Aug 13 00:52:03.955662 containerd[1575]: time="2025-08-13T00:52:03.955622536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mmxc6,Uid:7697ce71-aa40-4c78-acaa-c59079720a2c,Namespace:calico-system,Attempt:0,}" Aug 13 00:52:03.999847 systemd[1]: Started sshd@20-172.234.199.101:22-147.75.109.163:49026.service - OpenSSH per-connection server daemon (147.75.109.163:49026). Aug 13 00:52:04.028777 containerd[1575]: time="2025-08-13T00:52:04.028478294Z" level=error msg="Failed to destroy network for sandbox \"64f065acf63904b16a7e7e14b1c86fb0b80d3b9b3f85876a756fdf106794ff75\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:04.034274 systemd[1]: run-netns-cni\x2dcabafdb5\x2de679\x2d7a9e\x2d5819\x2da95fb3fd9280.mount: Deactivated successfully. Aug 13 00:52:04.034455 containerd[1575]: time="2025-08-13T00:52:04.034419416Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mmxc6,Uid:7697ce71-aa40-4c78-acaa-c59079720a2c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"64f065acf63904b16a7e7e14b1c86fb0b80d3b9b3f85876a756fdf106794ff75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:04.034867 kubelet[2714]: E0813 00:52:04.034830 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64f065acf63904b16a7e7e14b1c86fb0b80d3b9b3f85876a756fdf106794ff75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:04.035587 kubelet[2714]: E0813 00:52:04.035237 2714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64f065acf63904b16a7e7e14b1c86fb0b80d3b9b3f85876a756fdf106794ff75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mmxc6" Aug 13 00:52:04.035587 kubelet[2714]: E0813 00:52:04.035269 2714 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64f065acf63904b16a7e7e14b1c86fb0b80d3b9b3f85876a756fdf106794ff75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mmxc6" Aug 13 00:52:04.035587 kubelet[2714]: E0813 00:52:04.035330 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mmxc6_calico-system(7697ce71-aa40-4c78-acaa-c59079720a2c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mmxc6_calico-system(7697ce71-aa40-4c78-acaa-c59079720a2c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"64f065acf63904b16a7e7e14b1c86fb0b80d3b9b3f85876a756fdf106794ff75\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mmxc6" podUID="7697ce71-aa40-4c78-acaa-c59079720a2c" Aug 13 00:52:04.352572 sshd[5018]: Accepted publickey for core from 147.75.109.163 port 49026 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw Aug 13 00:52:04.354049 sshd-session[5018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:52:04.360603 systemd-logind[1528]: New session 17 of user core. Aug 13 00:52:04.364652 systemd[1]: Started session-17.scope - Session 17 of User core. Aug 13 00:52:04.659649 sshd[5025]: Connection closed by 147.75.109.163 port 49026 Aug 13 00:52:04.660176 sshd-session[5018]: pam_unix(sshd:session): session closed for user core Aug 13 00:52:04.664177 systemd-logind[1528]: Session 17 logged out. Waiting for processes to exit. Aug 13 00:52:04.664646 systemd[1]: sshd@20-172.234.199.101:22-147.75.109.163:49026.service: Deactivated successfully. Aug 13 00:52:04.667001 systemd[1]: session-17.scope: Deactivated successfully. Aug 13 00:52:04.668411 systemd-logind[1528]: Removed session 17. Aug 13 00:52:04.956462 containerd[1575]: time="2025-08-13T00:52:04.956195159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d647ccb87-wkv5s,Uid:2addd270-7ed2-4caf-9455-ca6a63f6fe8b,Namespace:calico-system,Attempt:0,}" Aug 13 00:52:05.006193 containerd[1575]: time="2025-08-13T00:52:05.006138950Z" level=error msg="Failed to destroy network for sandbox \"0ea22d58a77c257ec9bbebb2a81791d340fe4f0b958fbaf24d6d4c7b0d5c8a8e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:05.008630 systemd[1]: run-netns-cni\x2d1a17a35a\x2dbfa4\x2d2af1\x2dbc95\x2db784275babc5.mount: Deactivated successfully. 
Aug 13 00:52:05.009661 containerd[1575]: time="2025-08-13T00:52:05.009612657Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d647ccb87-wkv5s,Uid:2addd270-7ed2-4caf-9455-ca6a63f6fe8b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ea22d58a77c257ec9bbebb2a81791d340fe4f0b958fbaf24d6d4c7b0d5c8a8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:05.010197 kubelet[2714]: E0813 00:52:05.009938 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ea22d58a77c257ec9bbebb2a81791d340fe4f0b958fbaf24d6d4c7b0d5c8a8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:05.010197 kubelet[2714]: E0813 00:52:05.009999 2714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ea22d58a77c257ec9bbebb2a81791d340fe4f0b958fbaf24d6d4c7b0d5c8a8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" Aug 13 00:52:05.010197 kubelet[2714]: E0813 00:52:05.010020 2714 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ea22d58a77c257ec9bbebb2a81791d340fe4f0b958fbaf24d6d4c7b0d5c8a8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" Aug 13 00:52:05.010197 kubelet[2714]: E0813 00:52:05.010067 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6d647ccb87-wkv5s_calico-system(2addd270-7ed2-4caf-9455-ca6a63f6fe8b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6d647ccb87-wkv5s_calico-system(2addd270-7ed2-4caf-9455-ca6a63f6fe8b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0ea22d58a77c257ec9bbebb2a81791d340fe4f0b958fbaf24d6d4c7b0d5c8a8e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" podUID="2addd270-7ed2-4caf-9455-ca6a63f6fe8b" Aug 13 00:52:09.719967 systemd[1]: Started sshd@21-172.234.199.101:22-147.75.109.163:53518.service - OpenSSH per-connection server daemon (147.75.109.163:53518). 
Aug 13 00:52:09.956193 kubelet[2714]: E0813 00:52:09.955857 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:52:09.956976 containerd[1575]: time="2025-08-13T00:52:09.956786668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hxx58,Uid:635fd78e-3d10-4a30-9894-3818897e1867,Namespace:kube-system,Attempt:0,}" Aug 13 00:52:10.006057 containerd[1575]: time="2025-08-13T00:52:10.005950922Z" level=error msg="Failed to destroy network for sandbox \"7007ba7da8c67bf7743b38b659a05eafa83163744010a8af8fc7147ecf9021be\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:10.009300 systemd[1]: run-netns-cni\x2d78111db0\x2d21c3\x2dbfed\x2d62b1\x2d9be352d28445.mount: Deactivated successfully. Aug 13 00:52:10.009780 containerd[1575]: time="2025-08-13T00:52:10.009467718Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hxx58,Uid:635fd78e-3d10-4a30-9894-3818897e1867,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7007ba7da8c67bf7743b38b659a05eafa83163744010a8af8fc7147ecf9021be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:10.010215 kubelet[2714]: E0813 00:52:10.009903 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7007ba7da8c67bf7743b38b659a05eafa83163744010a8af8fc7147ecf9021be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:10.010215 kubelet[2714]: E0813 00:52:10.009951 2714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7007ba7da8c67bf7743b38b659a05eafa83163744010a8af8fc7147ecf9021be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-hxx58" Aug 13 00:52:10.010215 kubelet[2714]: E0813 00:52:10.009969 2714 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7007ba7da8c67bf7743b38b659a05eafa83163744010a8af8fc7147ecf9021be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-hxx58" Aug 13 00:52:10.010215 kubelet[2714]: E0813 00:52:10.010014 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-hxx58_kube-system(635fd78e-3d10-4a30-9894-3818897e1867)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-hxx58_kube-system(635fd78e-3d10-4a30-9894-3818897e1867)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7007ba7da8c67bf7743b38b659a05eafa83163744010a8af8fc7147ecf9021be\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-hxx58" podUID="635fd78e-3d10-4a30-9894-3818897e1867" Aug 13 00:52:10.063506 sshd[5066]: Accepted publickey for core from 147.75.109.163 port 53518 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw Aug 13 00:52:10.064867 sshd-session[5066]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:52:10.070000 systemd-logind[1528]: New session 18 of user core. Aug 13 00:52:10.075785 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 13 00:52:10.408141 sshd[5094]: Connection closed by 147.75.109.163 port 53518 Aug 13 00:52:10.407088 sshd-session[5066]: pam_unix(sshd:session): session closed for user core Aug 13 00:52:10.411422 systemd-logind[1528]: Session 18 logged out. Waiting for processes to exit. Aug 13 00:52:10.414724 systemd[1]: sshd@21-172.234.199.101:22-147.75.109.163:53518.service: Deactivated successfully. Aug 13 00:52:10.418455 systemd[1]: session-18.scope: Deactivated successfully. Aug 13 00:52:10.422109 systemd-logind[1528]: Removed session 18. Aug 13 00:52:13.956343 kubelet[2714]: E0813 00:52:13.955975 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:52:13.957232 containerd[1575]: time="2025-08-13T00:52:13.957171078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dnlsw,Uid:fd8972e4-10f5-4f13-8b21-de07e7f562ab,Namespace:kube-system,Attempt:0,}" Aug 13 00:52:14.008304 containerd[1575]: time="2025-08-13T00:52:14.008227961Z" level=error msg="Failed to destroy network for sandbox \"9a1851cfa5c0ace4210d6cec7cb9b7bab9d81b8d67a1d26d02a0a30cc58d9807\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:14.010060 containerd[1575]: time="2025-08-13T00:52:14.009930954Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dnlsw,Uid:fd8972e4-10f5-4f13-8b21-de07e7f562ab,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a1851cfa5c0ace4210d6cec7cb9b7bab9d81b8d67a1d26d02a0a30cc58d9807\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:14.011047 kubelet[2714]: E0813 00:52:14.010917 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a1851cfa5c0ace4210d6cec7cb9b7bab9d81b8d67a1d26d02a0a30cc58d9807\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:14.012290 kubelet[2714]: E0813 00:52:14.011244 2714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a1851cfa5c0ace4210d6cec7cb9b7bab9d81b8d67a1d26d02a0a30cc58d9807\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-dnlsw" Aug 13 00:52:14.012085 systemd[1]: run-netns-cni\x2db3225007\x2d68bc\x2d07bb\x2da7eb\x2d62189fdf1065.mount: Deactivated successfully. Aug 13 00:52:14.012780 kubelet[2714]: E0813 00:52:14.011267 2714 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a1851cfa5c0ace4210d6cec7cb9b7bab9d81b8d67a1d26d02a0a30cc58d9807\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-dnlsw" Aug 13 00:52:14.012780 kubelet[2714]: E0813 00:52:14.012649 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-dnlsw_kube-system(fd8972e4-10f5-4f13-8b21-de07e7f562ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-dnlsw_kube-system(fd8972e4-10f5-4f13-8b21-de07e7f562ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9a1851cfa5c0ace4210d6cec7cb9b7bab9d81b8d67a1d26d02a0a30cc58d9807\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-dnlsw" podUID="fd8972e4-10f5-4f13-8b21-de07e7f562ab" Aug 13 00:52:14.956335 kubelet[2714]: E0813 00:52:14.956290 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\"\"" pod="calico-system/calico-node-x7x94" podUID="ab709cf9-e61c-420b-90c5-1c0355308621" Aug 13 00:52:15.466707 systemd[1]: Started sshd@22-172.234.199.101:22-147.75.109.163:53532.service - OpenSSH per-connection server daemon (147.75.109.163:53532). Aug 13 00:52:15.807159 sshd[5136]: Accepted publickey for core from 147.75.109.163 port 53532 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw Aug 13 00:52:15.808439 sshd-session[5136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:52:15.813823 systemd-logind[1528]: New session 19 of user core. Aug 13 00:52:15.820110 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 13 00:52:15.955501 kubelet[2714]: E0813 00:52:15.955234 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:52:16.105878 sshd[5138]: Connection closed by 147.75.109.163 port 53532 Aug 13 00:52:16.106390 sshd-session[5136]: pam_unix(sshd:session): session closed for user core Aug 13 00:52:16.111286 systemd[1]: sshd@22-172.234.199.101:22-147.75.109.163:53532.service: Deactivated successfully. Aug 13 00:52:16.113326 systemd[1]: session-19.scope: Deactivated successfully. Aug 13 00:52:16.114767 systemd-logind[1528]: Session 19 logged out. Waiting for processes to exit. Aug 13 00:52:16.116960 systemd-logind[1528]: Removed session 19. 
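Editor's note on the recurring kubelet dns.go:153 warning above: the node's resolv.conf appears to list more nameservers than a pod resolver config can hold, so kubelet applies only the first few (the "applied nameserver line" 172.232.0.16 172.232.0.21 172.232.0.13) and keeps warning that the rest were omitted. The Go sketch below illustrates that truncation under the assumption of a three-nameserver limit; it is not kubelet's actual dns.go, and the fourth nameserver shown is hypothetical.

// Editorial sketch only: shows why "Nameserver limits exceeded" keeps appearing
// when a host resolv.conf lists more nameservers than the assumed limit of three.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

const maxNameservers = 3 // assumed per-pod nameserver limit

// appliedNameservers returns the nameservers that would be applied and whether
// any were dropped because the limit was exceeded.
func appliedNameservers(resolvConf string) ([]string, bool) {
	var servers []string
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		return servers[:maxNameservers], true
	}
	return servers, false
}

func main() {
	// Hypothetical host resolv.conf; the first three entries match the
	// "applied nameserver line" seen in the log, the fourth is invented.
	conf := "nameserver 172.232.0.16\nnameserver 172.232.0.21\nnameserver 172.232.0.13\nnameserver 1.1.1.1\n"
	applied, truncated := appliedNameservers(conf)
	if truncated {
		fmt.Println("Nameserver limits were exceeded, applied nameserver line is:", strings.Join(applied, " "))
	}
}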
Aug 13 00:52:17.956952 containerd[1575]: time="2025-08-13T00:52:17.956852479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mmxc6,Uid:7697ce71-aa40-4c78-acaa-c59079720a2c,Namespace:calico-system,Attempt:0,}" Aug 13 00:52:18.010086 containerd[1575]: time="2025-08-13T00:52:18.010022802Z" level=error msg="Failed to destroy network for sandbox \"3fdb9d1362180dc49bf2236329e43cd2c30a656fd5cdabe40f8d9d9287169d07\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:18.012022 systemd[1]: run-netns-cni\x2d7ddf2f0e\x2de738\x2df7fb\x2dfd52\x2db1178cea0a11.mount: Deactivated successfully. Aug 13 00:52:18.015361 containerd[1575]: time="2025-08-13T00:52:18.015315661Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mmxc6,Uid:7697ce71-aa40-4c78-acaa-c59079720a2c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fdb9d1362180dc49bf2236329e43cd2c30a656fd5cdabe40f8d9d9287169d07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:18.016253 kubelet[2714]: E0813 00:52:18.015733 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fdb9d1362180dc49bf2236329e43cd2c30a656fd5cdabe40f8d9d9287169d07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:18.016253 kubelet[2714]: E0813 00:52:18.015785 2714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fdb9d1362180dc49bf2236329e43cd2c30a656fd5cdabe40f8d9d9287169d07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mmxc6" Aug 13 00:52:18.016253 kubelet[2714]: E0813 00:52:18.015803 2714 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fdb9d1362180dc49bf2236329e43cd2c30a656fd5cdabe40f8d9d9287169d07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mmxc6" Aug 13 00:52:18.016951 kubelet[2714]: E0813 00:52:18.016639 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mmxc6_calico-system(7697ce71-aa40-4c78-acaa-c59079720a2c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mmxc6_calico-system(7697ce71-aa40-4c78-acaa-c59079720a2c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3fdb9d1362180dc49bf2236329e43cd2c30a656fd5cdabe40f8d9d9287169d07\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mmxc6" podUID="7697ce71-aa40-4c78-acaa-c59079720a2c" 
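Editor's note on the repeated sandbox failures: every RunPodSandbox attempt in this section dies inside the Calico CNI plugin because /var/lib/calico/nodename does not exist on the host. That file is normally written by the calico/node container, which never starts here since its image pull is stuck in back-off, so coredns, csi-node-driver and calico-kube-controllers all stay pending. The sketch below only mirrors that stat-based check to show where the error string comes from; it is an illustration, not Calico's source code.

// Editorial sketch only -- not the Calico CNI plugin. It reproduces the check
// that fails throughout the log: the plugin reads the node name from a file
// that the calico/node container writes after it starts, and aborts sandbox
// setup when the file is missing.
package main

import (
	"fmt"
	"os"
)

const nodenameFile = "/var/lib/calico/nodename"

func main() {
	if _, err := os.Stat(nodenameFile); err != nil {
		// Same failure mode as the containerd/kubelet errors above:
		// calico-node never ran, so the file was never written.
		fmt.Fprintf(os.Stderr,
			"%v: check that the calico/node container is running and has mounted /var/lib/calico/\n", err)
		os.Exit(1)
	}

	name, err := os.ReadFile(nodenameFile)
	if err != nil {
		fmt.Fprintln(os.Stderr, "read nodename:", err)
		os.Exit(1)
	}
	fmt.Printf("CNI would use node name %q for pod networking\n", string(name))
}

Once calico-node runs and mounts /var/lib/calico/ on the host, the same check succeeds and the pending sandboxes above can be created on the next retry.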
Aug 13 00:52:18.955861 kubelet[2714]: E0813 00:52:18.955735 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:52:18.956130 containerd[1575]: time="2025-08-13T00:52:18.956089629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d647ccb87-wkv5s,Uid:2addd270-7ed2-4caf-9455-ca6a63f6fe8b,Namespace:calico-system,Attempt:0,}" Aug 13 00:52:19.005578 containerd[1575]: time="2025-08-13T00:52:19.005489295Z" level=error msg="Failed to destroy network for sandbox \"9f63f267a48da4c2324efe0b2c4aef6512f30abc642761f182ace390183fbed5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:19.006610 containerd[1575]: time="2025-08-13T00:52:19.006512587Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d647ccb87-wkv5s,Uid:2addd270-7ed2-4caf-9455-ca6a63f6fe8b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f63f267a48da4c2324efe0b2c4aef6512f30abc642761f182ace390183fbed5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:19.009231 systemd[1]: run-netns-cni\x2df5d6a419\x2d8422\x2d336c\x2d568f\x2d901e7e9c9062.mount: Deactivated successfully. Aug 13 00:52:19.009646 kubelet[2714]: E0813 00:52:19.009499 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f63f267a48da4c2324efe0b2c4aef6512f30abc642761f182ace390183fbed5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:19.009646 kubelet[2714]: E0813 00:52:19.009598 2714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f63f267a48da4c2324efe0b2c4aef6512f30abc642761f182ace390183fbed5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" Aug 13 00:52:19.009646 kubelet[2714]: E0813 00:52:19.009618 2714 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f63f267a48da4c2324efe0b2c4aef6512f30abc642761f182ace390183fbed5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" Aug 13 00:52:19.009834 kubelet[2714]: E0813 00:52:19.009789 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6d647ccb87-wkv5s_calico-system(2addd270-7ed2-4caf-9455-ca6a63f6fe8b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6d647ccb87-wkv5s_calico-system(2addd270-7ed2-4caf-9455-ca6a63f6fe8b)\\\": rpc error: code = Unknown desc = failed to 
setup network for sandbox \\\"9f63f267a48da4c2324efe0b2c4aef6512f30abc642761f182ace390183fbed5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" podUID="2addd270-7ed2-4caf-9455-ca6a63f6fe8b" Aug 13 00:52:21.174827 systemd[1]: Started sshd@23-172.234.199.101:22-147.75.109.163:53150.service - OpenSSH per-connection server daemon (147.75.109.163:53150). Aug 13 00:52:21.522403 sshd[5200]: Accepted publickey for core from 147.75.109.163 port 53150 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw Aug 13 00:52:21.524031 sshd-session[5200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:52:21.529186 systemd-logind[1528]: New session 20 of user core. Aug 13 00:52:21.532654 systemd[1]: Started session-20.scope - Session 20 of User core. Aug 13 00:52:21.839338 sshd[5202]: Connection closed by 147.75.109.163 port 53150 Aug 13 00:52:21.839941 sshd-session[5200]: pam_unix(sshd:session): session closed for user core Aug 13 00:52:21.846426 systemd[1]: sshd@23-172.234.199.101:22-147.75.109.163:53150.service: Deactivated successfully. Aug 13 00:52:21.850927 systemd[1]: session-20.scope: Deactivated successfully. Aug 13 00:52:21.852887 systemd-logind[1528]: Session 20 logged out. Waiting for processes to exit. Aug 13 00:52:21.854492 systemd-logind[1528]: Removed session 20. Aug 13 00:52:23.956146 kubelet[2714]: E0813 00:52:23.955435 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:52:23.956146 kubelet[2714]: E0813 00:52:23.955931 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:52:23.956778 containerd[1575]: time="2025-08-13T00:52:23.956726366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hxx58,Uid:635fd78e-3d10-4a30-9894-3818897e1867,Namespace:kube-system,Attempt:0,}" Aug 13 00:52:24.015700 containerd[1575]: time="2025-08-13T00:52:24.015581994Z" level=error msg="Failed to destroy network for sandbox \"208e3b5083231d134465a5670bf7f0a0cff311870d60c6f7583e2bb36df74916\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:24.017453 systemd[1]: run-netns-cni\x2d2cf9b6f4\x2d48f4\x2d2849\x2d7611\x2db4752192f9de.mount: Deactivated successfully. 
Aug 13 00:52:24.019607 containerd[1575]: time="2025-08-13T00:52:24.019580660Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hxx58,Uid:635fd78e-3d10-4a30-9894-3818897e1867,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"208e3b5083231d134465a5670bf7f0a0cff311870d60c6f7583e2bb36df74916\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:24.020548 kubelet[2714]: E0813 00:52:24.019915 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"208e3b5083231d134465a5670bf7f0a0cff311870d60c6f7583e2bb36df74916\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:24.020548 kubelet[2714]: E0813 00:52:24.019967 2714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"208e3b5083231d134465a5670bf7f0a0cff311870d60c6f7583e2bb36df74916\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-hxx58" Aug 13 00:52:24.020548 kubelet[2714]: E0813 00:52:24.019990 2714 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"208e3b5083231d134465a5670bf7f0a0cff311870d60c6f7583e2bb36df74916\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-hxx58" Aug 13 00:52:24.020548 kubelet[2714]: E0813 00:52:24.020037 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-hxx58_kube-system(635fd78e-3d10-4a30-9894-3818897e1867)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-hxx58_kube-system(635fd78e-3d10-4a30-9894-3818897e1867)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"208e3b5083231d134465a5670bf7f0a0cff311870d60c6f7583e2bb36df74916\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-hxx58" podUID="635fd78e-3d10-4a30-9894-3818897e1867" Aug 13 00:52:24.955621 kubelet[2714]: E0813 00:52:24.955454 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:52:24.956505 containerd[1575]: time="2025-08-13T00:52:24.956452967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dnlsw,Uid:fd8972e4-10f5-4f13-8b21-de07e7f562ab,Namespace:kube-system,Attempt:0,}" Aug 13 00:52:25.011542 containerd[1575]: time="2025-08-13T00:52:25.011451257Z" level=error msg="Failed to destroy network for sandbox \"4ed73cb896661d2b7da59ea00475da8e74385f2aa2b706f726505eb1258fdb9a\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:25.013688 systemd[1]: run-netns-cni\x2d53d0815e\x2df82b\x2df657\x2deba7\x2d4ade7268fbb9.mount: Deactivated successfully. Aug 13 00:52:25.015918 containerd[1575]: time="2025-08-13T00:52:25.015828165Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dnlsw,Uid:fd8972e4-10f5-4f13-8b21-de07e7f562ab,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ed73cb896661d2b7da59ea00475da8e74385f2aa2b706f726505eb1258fdb9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:25.016917 kubelet[2714]: E0813 00:52:25.016886 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ed73cb896661d2b7da59ea00475da8e74385f2aa2b706f726505eb1258fdb9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:25.017154 kubelet[2714]: E0813 00:52:25.016935 2714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ed73cb896661d2b7da59ea00475da8e74385f2aa2b706f726505eb1258fdb9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-dnlsw" Aug 13 00:52:25.017154 kubelet[2714]: E0813 00:52:25.016953 2714 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ed73cb896661d2b7da59ea00475da8e74385f2aa2b706f726505eb1258fdb9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-dnlsw" Aug 13 00:52:25.017154 kubelet[2714]: E0813 00:52:25.016988 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-dnlsw_kube-system(fd8972e4-10f5-4f13-8b21-de07e7f562ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-dnlsw_kube-system(fd8972e4-10f5-4f13-8b21-de07e7f562ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4ed73cb896661d2b7da59ea00475da8e74385f2aa2b706f726505eb1258fdb9a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-dnlsw" podUID="fd8972e4-10f5-4f13-8b21-de07e7f562ab" Aug 13 00:52:26.899065 systemd[1]: Started sshd@24-172.234.199.101:22-147.75.109.163:53158.service - OpenSSH per-connection server daemon (147.75.109.163:53158). 
Aug 13 00:52:27.238453 sshd[5267]: Accepted publickey for core from 147.75.109.163 port 53158 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw Aug 13 00:52:27.239767 sshd-session[5267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:52:27.244622 systemd-logind[1528]: New session 21 of user core. Aug 13 00:52:27.249652 systemd[1]: Started session-21.scope - Session 21 of User core. Aug 13 00:52:27.541965 sshd[5269]: Connection closed by 147.75.109.163 port 53158 Aug 13 00:52:27.542825 sshd-session[5267]: pam_unix(sshd:session): session closed for user core Aug 13 00:52:27.546562 systemd[1]: sshd@24-172.234.199.101:22-147.75.109.163:53158.service: Deactivated successfully. Aug 13 00:52:27.549012 systemd[1]: session-21.scope: Deactivated successfully. Aug 13 00:52:27.550031 systemd-logind[1528]: Session 21 logged out. Waiting for processes to exit. Aug 13 00:52:27.552073 systemd-logind[1528]: Removed session 21. Aug 13 00:52:29.956554 containerd[1575]: time="2025-08-13T00:52:29.956321781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mmxc6,Uid:7697ce71-aa40-4c78-acaa-c59079720a2c,Namespace:calico-system,Attempt:0,}" Aug 13 00:52:29.957456 kubelet[2714]: E0813 00:52:29.956772 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\"\"" pod="calico-system/calico-node-x7x94" podUID="ab709cf9-e61c-420b-90c5-1c0355308621" Aug 13 00:52:30.021065 containerd[1575]: time="2025-08-13T00:52:30.021010133Z" level=error msg="Failed to destroy network for sandbox \"613dda265a32b3c51ea3a8c3d0a274f288e61fa492f02b8275eaa50f43632783\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:30.023469 systemd[1]: run-netns-cni\x2d9f8ef1ba\x2dd4cb\x2d36a2\x2d31a7\x2d7e7e32297e38.mount: Deactivated successfully. 
Aug 13 00:52:30.024513 containerd[1575]: time="2025-08-13T00:52:30.022502695Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mmxc6,Uid:7697ce71-aa40-4c78-acaa-c59079720a2c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"613dda265a32b3c51ea3a8c3d0a274f288e61fa492f02b8275eaa50f43632783\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:30.025085 kubelet[2714]: E0813 00:52:30.024648 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"613dda265a32b3c51ea3a8c3d0a274f288e61fa492f02b8275eaa50f43632783\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:30.025085 kubelet[2714]: E0813 00:52:30.024702 2714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"613dda265a32b3c51ea3a8c3d0a274f288e61fa492f02b8275eaa50f43632783\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mmxc6" Aug 13 00:52:30.025085 kubelet[2714]: E0813 00:52:30.024721 2714 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"613dda265a32b3c51ea3a8c3d0a274f288e61fa492f02b8275eaa50f43632783\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mmxc6" Aug 13 00:52:30.025085 kubelet[2714]: E0813 00:52:30.024768 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mmxc6_calico-system(7697ce71-aa40-4c78-acaa-c59079720a2c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mmxc6_calico-system(7697ce71-aa40-4c78-acaa-c59079720a2c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"613dda265a32b3c51ea3a8c3d0a274f288e61fa492f02b8275eaa50f43632783\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mmxc6" podUID="7697ce71-aa40-4c78-acaa-c59079720a2c" Aug 13 00:52:30.955389 kubelet[2714]: E0813 00:52:30.955284 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:52:30.955389 kubelet[2714]: E0813 00:52:30.955342 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:52:32.613725 systemd[1]: Started sshd@25-172.234.199.101:22-147.75.109.163:36428.service - OpenSSH per-connection server daemon (147.75.109.163:36428). 
Aug 13 00:52:32.957095 sshd[5308]: Accepted publickey for core from 147.75.109.163 port 36428 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw Aug 13 00:52:32.958545 sshd-session[5308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:52:32.965129 systemd-logind[1528]: New session 22 of user core. Aug 13 00:52:32.972841 systemd[1]: Started session-22.scope - Session 22 of User core. Aug 13 00:52:33.271161 sshd[5310]: Connection closed by 147.75.109.163 port 36428 Aug 13 00:52:33.272009 sshd-session[5308]: pam_unix(sshd:session): session closed for user core Aug 13 00:52:33.278402 systemd-logind[1528]: Session 22 logged out. Waiting for processes to exit. Aug 13 00:52:33.279258 systemd[1]: sshd@25-172.234.199.101:22-147.75.109.163:36428.service: Deactivated successfully. Aug 13 00:52:33.281261 systemd[1]: session-22.scope: Deactivated successfully. Aug 13 00:52:33.284193 systemd-logind[1528]: Removed session 22. Aug 13 00:52:33.955896 containerd[1575]: time="2025-08-13T00:52:33.955742061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d647ccb87-wkv5s,Uid:2addd270-7ed2-4caf-9455-ca6a63f6fe8b,Namespace:calico-system,Attempt:0,}" Aug 13 00:52:34.012910 containerd[1575]: time="2025-08-13T00:52:34.012852818Z" level=error msg="Failed to destroy network for sandbox \"4ecafeafd7a7e8e7f695eaeb38567dfc164589c5c161f16f57b4bbaf11b93953\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:34.016353 systemd[1]: run-netns-cni\x2db33f49df\x2de239\x2df4f6\x2da2f1\x2dfc5fe1d0f323.mount: Deactivated successfully. Aug 13 00:52:34.020480 containerd[1575]: time="2025-08-13T00:52:34.020445450Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d647ccb87-wkv5s,Uid:2addd270-7ed2-4caf-9455-ca6a63f6fe8b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ecafeafd7a7e8e7f695eaeb38567dfc164589c5c161f16f57b4bbaf11b93953\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:34.021278 kubelet[2714]: E0813 00:52:34.021203 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ecafeafd7a7e8e7f695eaeb38567dfc164589c5c161f16f57b4bbaf11b93953\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:34.021578 kubelet[2714]: E0813 00:52:34.021293 2714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ecafeafd7a7e8e7f695eaeb38567dfc164589c5c161f16f57b4bbaf11b93953\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" Aug 13 00:52:34.021578 kubelet[2714]: E0813 00:52:34.021346 2714 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"4ecafeafd7a7e8e7f695eaeb38567dfc164589c5c161f16f57b4bbaf11b93953\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" Aug 13 00:52:34.021578 kubelet[2714]: E0813 00:52:34.021506 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6d647ccb87-wkv5s_calico-system(2addd270-7ed2-4caf-9455-ca6a63f6fe8b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6d647ccb87-wkv5s_calico-system(2addd270-7ed2-4caf-9455-ca6a63f6fe8b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4ecafeafd7a7e8e7f695eaeb38567dfc164589c5c161f16f57b4bbaf11b93953\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" podUID="2addd270-7ed2-4caf-9455-ca6a63f6fe8b" Aug 13 00:52:34.954952 kubelet[2714]: E0813 00:52:34.954918 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:52:34.955545 containerd[1575]: time="2025-08-13T00:52:34.955334654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hxx58,Uid:635fd78e-3d10-4a30-9894-3818897e1867,Namespace:kube-system,Attempt:0,}" Aug 13 00:52:35.018005 containerd[1575]: time="2025-08-13T00:52:35.017954259Z" level=error msg="Failed to destroy network for sandbox \"8b2395363d5adcf1eebfc08ccce650165d5376b0e07397f25992c7b5d1b49b43\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:35.020594 containerd[1575]: time="2025-08-13T00:52:35.019576551Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hxx58,Uid:635fd78e-3d10-4a30-9894-3818897e1867,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b2395363d5adcf1eebfc08ccce650165d5376b0e07397f25992c7b5d1b49b43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:35.020800 kubelet[2714]: E0813 00:52:35.020759 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b2395363d5adcf1eebfc08ccce650165d5376b0e07397f25992c7b5d1b49b43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:35.020938 kubelet[2714]: E0813 00:52:35.020888 2714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b2395363d5adcf1eebfc08ccce650165d5376b0e07397f25992c7b5d1b49b43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-hxx58" Aug 
13 00:52:35.020938 kubelet[2714]: E0813 00:52:35.020909 2714 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b2395363d5adcf1eebfc08ccce650165d5376b0e07397f25992c7b5d1b49b43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-hxx58" Aug 13 00:52:35.021075 kubelet[2714]: E0813 00:52:35.021027 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-hxx58_kube-system(635fd78e-3d10-4a30-9894-3818897e1867)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-hxx58_kube-system(635fd78e-3d10-4a30-9894-3818897e1867)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8b2395363d5adcf1eebfc08ccce650165d5376b0e07397f25992c7b5d1b49b43\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-hxx58" podUID="635fd78e-3d10-4a30-9894-3818897e1867" Aug 13 00:52:35.021104 systemd[1]: run-netns-cni\x2df3d4a0f8\x2dc678\x2d9804\x2d1ae6\x2da57620ab8db9.mount: Deactivated successfully. Aug 13 00:52:38.342379 systemd[1]: Started sshd@26-172.234.199.101:22-147.75.109.163:50088.service - OpenSSH per-connection server daemon (147.75.109.163:50088). Aug 13 00:52:38.693001 sshd[5378]: Accepted publickey for core from 147.75.109.163 port 50088 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw Aug 13 00:52:38.694374 sshd-session[5378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:52:38.699437 systemd-logind[1528]: New session 23 of user core. Aug 13 00:52:38.704641 systemd[1]: Started session-23.scope - Session 23 of User core. 
Aug 13 00:52:38.955850 kubelet[2714]: E0813 00:52:38.955559 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:52:38.956500 containerd[1575]: time="2025-08-13T00:52:38.955874786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dnlsw,Uid:fd8972e4-10f5-4f13-8b21-de07e7f562ab,Namespace:kube-system,Attempt:0,}" Aug 13 00:52:39.021548 containerd[1575]: time="2025-08-13T00:52:39.019834851Z" level=error msg="Failed to destroy network for sandbox \"9c5fec9c36a6f72629b39fa3f4ee780fd79edd78321279902e8439276229ffa4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:39.022446 containerd[1575]: time="2025-08-13T00:52:39.022409155Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dnlsw,Uid:fd8972e4-10f5-4f13-8b21-de07e7f562ab,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c5fec9c36a6f72629b39fa3f4ee780fd79edd78321279902e8439276229ffa4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:39.023622 systemd[1]: run-netns-cni\x2daa2acf6a\x2d88a7\x2d675b\x2d24db\x2dacc95f94d503.mount: Deactivated successfully. Aug 13 00:52:39.024171 kubelet[2714]: E0813 00:52:39.024088 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c5fec9c36a6f72629b39fa3f4ee780fd79edd78321279902e8439276229ffa4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:39.024171 kubelet[2714]: E0813 00:52:39.024150 2714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c5fec9c36a6f72629b39fa3f4ee780fd79edd78321279902e8439276229ffa4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-dnlsw" Aug 13 00:52:39.024171 kubelet[2714]: E0813 00:52:39.024168 2714 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c5fec9c36a6f72629b39fa3f4ee780fd79edd78321279902e8439276229ffa4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-dnlsw" Aug 13 00:52:39.024338 kubelet[2714]: E0813 00:52:39.024209 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-dnlsw_kube-system(fd8972e4-10f5-4f13-8b21-de07e7f562ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-dnlsw_kube-system(fd8972e4-10f5-4f13-8b21-de07e7f562ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9c5fec9c36a6f72629b39fa3f4ee780fd79edd78321279902e8439276229ffa4\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-dnlsw" podUID="fd8972e4-10f5-4f13-8b21-de07e7f562ab" Aug 13 00:52:39.043298 sshd[5380]: Connection closed by 147.75.109.163 port 50088 Aug 13 00:52:39.045678 sshd-session[5378]: pam_unix(sshd:session): session closed for user core Aug 13 00:52:39.050312 systemd-logind[1528]: Session 23 logged out. Waiting for processes to exit. Aug 13 00:52:39.051332 systemd[1]: sshd@26-172.234.199.101:22-147.75.109.163:50088.service: Deactivated successfully. Aug 13 00:52:39.054117 systemd[1]: session-23.scope: Deactivated successfully. Aug 13 00:52:39.058717 systemd-logind[1528]: Removed session 23. Aug 13 00:52:42.956022 containerd[1575]: time="2025-08-13T00:52:42.955961229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mmxc6,Uid:7697ce71-aa40-4c78-acaa-c59079720a2c,Namespace:calico-system,Attempt:0,}" Aug 13 00:52:42.956843 kubelet[2714]: E0813 00:52:42.956604 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\"\"" pod="calico-system/calico-node-x7x94" podUID="ab709cf9-e61c-420b-90c5-1c0355308621" Aug 13 00:52:43.006250 containerd[1575]: time="2025-08-13T00:52:43.006190792Z" level=error msg="Failed to destroy network for sandbox \"b67042b13b4de7f400351b6f1a53842b34f471e1e9721a5bce25ea37b102dea8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:43.008070 systemd[1]: run-netns-cni\x2d5bf5200e\x2d23ff\x2dca2d\x2dde01\x2d8d5cb082a9fb.mount: Deactivated successfully. 
Aug 13 00:52:43.010460 containerd[1575]: time="2025-08-13T00:52:43.010417878Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mmxc6,Uid:7697ce71-aa40-4c78-acaa-c59079720a2c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b67042b13b4de7f400351b6f1a53842b34f471e1e9721a5bce25ea37b102dea8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:43.012004 kubelet[2714]: E0813 00:52:43.010977 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b67042b13b4de7f400351b6f1a53842b34f471e1e9721a5bce25ea37b102dea8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:43.012004 kubelet[2714]: E0813 00:52:43.011044 2714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b67042b13b4de7f400351b6f1a53842b34f471e1e9721a5bce25ea37b102dea8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mmxc6" Aug 13 00:52:43.012004 kubelet[2714]: E0813 00:52:43.011061 2714 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b67042b13b4de7f400351b6f1a53842b34f471e1e9721a5bce25ea37b102dea8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mmxc6" Aug 13 00:52:43.012004 kubelet[2714]: E0813 00:52:43.011107 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mmxc6_calico-system(7697ce71-aa40-4c78-acaa-c59079720a2c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mmxc6_calico-system(7697ce71-aa40-4c78-acaa-c59079720a2c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b67042b13b4de7f400351b6f1a53842b34f471e1e9721a5bce25ea37b102dea8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mmxc6" podUID="7697ce71-aa40-4c78-acaa-c59079720a2c" Aug 13 00:52:44.103481 systemd[1]: Started sshd@27-172.234.199.101:22-147.75.109.163:50094.service - OpenSSH per-connection server daemon (147.75.109.163:50094). Aug 13 00:52:44.440564 sshd[5446]: Accepted publickey for core from 147.75.109.163 port 50094 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw Aug 13 00:52:44.441868 sshd-session[5446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:52:44.446624 systemd-logind[1528]: New session 24 of user core. Aug 13 00:52:44.454653 systemd[1]: Started session-24.scope - Session 24 of User core. 
Aug 13 00:52:44.754195 sshd[5448]: Connection closed by 147.75.109.163 port 50094 Aug 13 00:52:44.754795 sshd-session[5446]: pam_unix(sshd:session): session closed for user core Aug 13 00:52:44.760344 systemd[1]: sshd@27-172.234.199.101:22-147.75.109.163:50094.service: Deactivated successfully. Aug 13 00:52:44.762931 systemd[1]: session-24.scope: Deactivated successfully. Aug 13 00:52:44.765684 systemd-logind[1528]: Session 24 logged out. Waiting for processes to exit. Aug 13 00:52:44.767861 systemd-logind[1528]: Removed session 24. Aug 13 00:52:44.955733 containerd[1575]: time="2025-08-13T00:52:44.955696294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d647ccb87-wkv5s,Uid:2addd270-7ed2-4caf-9455-ca6a63f6fe8b,Namespace:calico-system,Attempt:0,}" Aug 13 00:52:45.007592 containerd[1575]: time="2025-08-13T00:52:45.007336798Z" level=error msg="Failed to destroy network for sandbox \"5308c512335bd0f455e54f582a4c159db7ab25d04bfb3a61fcb6b8105f64ee08\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:45.011004 containerd[1575]: time="2025-08-13T00:52:45.010950233Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d647ccb87-wkv5s,Uid:2addd270-7ed2-4caf-9455-ca6a63f6fe8b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5308c512335bd0f455e54f582a4c159db7ab25d04bfb3a61fcb6b8105f64ee08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:45.011394 kubelet[2714]: E0813 00:52:45.011341 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5308c512335bd0f455e54f582a4c159db7ab25d04bfb3a61fcb6b8105f64ee08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:45.011724 kubelet[2714]: E0813 00:52:45.011406 2714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5308c512335bd0f455e54f582a4c159db7ab25d04bfb3a61fcb6b8105f64ee08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" Aug 13 00:52:45.011724 kubelet[2714]: E0813 00:52:45.011425 2714 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5308c512335bd0f455e54f582a4c159db7ab25d04bfb3a61fcb6b8105f64ee08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" Aug 13 00:52:45.011784 kubelet[2714]: E0813 00:52:45.011513 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6d647ccb87-wkv5s_calico-system(2addd270-7ed2-4caf-9455-ca6a63f6fe8b)\" with CreatePodSandboxError: \"Failed to create sandbox for 
pod \\\"calico-kube-controllers-6d647ccb87-wkv5s_calico-system(2addd270-7ed2-4caf-9455-ca6a63f6fe8b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5308c512335bd0f455e54f582a4c159db7ab25d04bfb3a61fcb6b8105f64ee08\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" podUID="2addd270-7ed2-4caf-9455-ca6a63f6fe8b" Aug 13 00:52:45.012377 systemd[1]: run-netns-cni\x2d21a7638e\x2d9d29\x2d578d\x2de547\x2d395d33757916.mount: Deactivated successfully. Aug 13 00:52:49.823723 systemd[1]: Started sshd@28-172.234.199.101:22-147.75.109.163:55736.service - OpenSSH per-connection server daemon (147.75.109.163:55736). Aug 13 00:52:49.955544 kubelet[2714]: E0813 00:52:49.955493 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:52:49.956873 containerd[1575]: time="2025-08-13T00:52:49.956798905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hxx58,Uid:635fd78e-3d10-4a30-9894-3818897e1867,Namespace:kube-system,Attempt:0,}" Aug 13 00:52:50.015733 containerd[1575]: time="2025-08-13T00:52:50.015698238Z" level=error msg="Failed to destroy network for sandbox \"5df8dbf2a6b7d3b4c002d93d1d82c930b67de82deee15f721fb5f94bd1bb9e4d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:50.019717 systemd[1]: run-netns-cni\x2df0f22855\x2d8f63\x2d1b41\x2dd71f\x2d22a6bcf63dbe.mount: Deactivated successfully. 
Aug 13 00:52:50.020273 containerd[1575]: time="2025-08-13T00:52:50.020211274Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hxx58,Uid:635fd78e-3d10-4a30-9894-3818897e1867,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5df8dbf2a6b7d3b4c002d93d1d82c930b67de82deee15f721fb5f94bd1bb9e4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:50.020716 kubelet[2714]: E0813 00:52:50.020602 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5df8dbf2a6b7d3b4c002d93d1d82c930b67de82deee15f721fb5f94bd1bb9e4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:50.020716 kubelet[2714]: E0813 00:52:50.020652 2714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5df8dbf2a6b7d3b4c002d93d1d82c930b67de82deee15f721fb5f94bd1bb9e4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-hxx58" Aug 13 00:52:50.020716 kubelet[2714]: E0813 00:52:50.020669 2714 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5df8dbf2a6b7d3b4c002d93d1d82c930b67de82deee15f721fb5f94bd1bb9e4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-hxx58" Aug 13 00:52:50.020716 kubelet[2714]: E0813 00:52:50.020706 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-hxx58_kube-system(635fd78e-3d10-4a30-9894-3818897e1867)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-hxx58_kube-system(635fd78e-3d10-4a30-9894-3818897e1867)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5df8dbf2a6b7d3b4c002d93d1d82c930b67de82deee15f721fb5f94bd1bb9e4d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-hxx58" podUID="635fd78e-3d10-4a30-9894-3818897e1867" Aug 13 00:52:50.161614 sshd[5488]: Accepted publickey for core from 147.75.109.163 port 55736 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw Aug 13 00:52:50.163422 sshd-session[5488]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:52:50.169892 systemd-logind[1528]: New session 25 of user core. Aug 13 00:52:50.179650 systemd[1]: Started session-25.scope - Session 25 of User core. Aug 13 00:52:50.464117 sshd[5516]: Connection closed by 147.75.109.163 port 55736 Aug 13 00:52:50.464850 sshd-session[5488]: pam_unix(sshd:session): session closed for user core Aug 13 00:52:50.469674 systemd[1]: sshd@28-172.234.199.101:22-147.75.109.163:55736.service: Deactivated successfully. 
Aug 13 00:52:50.471938 systemd[1]: session-25.scope: Deactivated successfully. Aug 13 00:52:50.473327 systemd-logind[1528]: Session 25 logged out. Waiting for processes to exit. Aug 13 00:52:50.475302 systemd-logind[1528]: Removed session 25. Aug 13 00:52:52.955830 kubelet[2714]: E0813 00:52:52.955798 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:52:52.956343 containerd[1575]: time="2025-08-13T00:52:52.956235324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dnlsw,Uid:fd8972e4-10f5-4f13-8b21-de07e7f562ab,Namespace:kube-system,Attempt:0,}" Aug 13 00:52:53.011738 containerd[1575]: time="2025-08-13T00:52:53.011688060Z" level=error msg="Failed to destroy network for sandbox \"56f85945d88f29b862f27d54e77f195b32275bb5529b0407f0dd82f576b7e0bd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:53.015239 systemd[1]: run-netns-cni\x2dfc81016b\x2d62b9\x2def4c\x2d7ff3\x2d6c26316d16af.mount: Deactivated successfully. Aug 13 00:52:53.016649 containerd[1575]: time="2025-08-13T00:52:53.016567047Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dnlsw,Uid:fd8972e4-10f5-4f13-8b21-de07e7f562ab,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"56f85945d88f29b862f27d54e77f195b32275bb5529b0407f0dd82f576b7e0bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:53.016947 kubelet[2714]: E0813 00:52:53.016918 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56f85945d88f29b862f27d54e77f195b32275bb5529b0407f0dd82f576b7e0bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:53.017002 kubelet[2714]: E0813 00:52:53.016970 2714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56f85945d88f29b862f27d54e77f195b32275bb5529b0407f0dd82f576b7e0bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-dnlsw" Aug 13 00:52:53.017002 kubelet[2714]: E0813 00:52:53.016990 2714 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56f85945d88f29b862f27d54e77f195b32275bb5529b0407f0dd82f576b7e0bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-dnlsw" Aug 13 00:52:53.017050 kubelet[2714]: E0813 00:52:53.017025 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-dnlsw_kube-system(fd8972e4-10f5-4f13-8b21-de07e7f562ab)\" with CreatePodSandboxError: \"Failed to create sandbox for 
pod \\\"coredns-7c65d6cfc9-dnlsw_kube-system(fd8972e4-10f5-4f13-8b21-de07e7f562ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"56f85945d88f29b862f27d54e77f195b32275bb5529b0407f0dd82f576b7e0bd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-dnlsw" podUID="fd8972e4-10f5-4f13-8b21-de07e7f562ab" Aug 13 00:52:54.955647 containerd[1575]: time="2025-08-13T00:52:54.955609310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mmxc6,Uid:7697ce71-aa40-4c78-acaa-c59079720a2c,Namespace:calico-system,Attempt:0,}" Aug 13 00:52:55.001234 containerd[1575]: time="2025-08-13T00:52:55.001179850Z" level=error msg="Failed to destroy network for sandbox \"61f63151df7fdcaf6244e1c570a15562529f13d3f16c68513c8b2db68f473fd2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:55.003003 systemd[1]: run-netns-cni\x2d796c5cbf\x2dd1fb\x2daee0\x2dd9d4\x2df63451a70e76.mount: Deactivated successfully. Aug 13 00:52:55.004799 containerd[1575]: time="2025-08-13T00:52:55.004696601Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mmxc6,Uid:7697ce71-aa40-4c78-acaa-c59079720a2c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"61f63151df7fdcaf6244e1c570a15562529f13d3f16c68513c8b2db68f473fd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:55.004968 kubelet[2714]: E0813 00:52:55.004928 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61f63151df7fdcaf6244e1c570a15562529f13d3f16c68513c8b2db68f473fd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:55.005407 kubelet[2714]: E0813 00:52:55.005001 2714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61f63151df7fdcaf6244e1c570a15562529f13d3f16c68513c8b2db68f473fd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mmxc6" Aug 13 00:52:55.005407 kubelet[2714]: E0813 00:52:55.005033 2714 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61f63151df7fdcaf6244e1c570a15562529f13d3f16c68513c8b2db68f473fd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mmxc6" Aug 13 00:52:55.005407 kubelet[2714]: E0813 00:52:55.005098 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mmxc6_calico-system(7697ce71-aa40-4c78-acaa-c59079720a2c)\" with CreatePodSandboxError: \"Failed to create sandbox 
for pod \\\"csi-node-driver-mmxc6_calico-system(7697ce71-aa40-4c78-acaa-c59079720a2c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"61f63151df7fdcaf6244e1c570a15562529f13d3f16c68513c8b2db68f473fd2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mmxc6" podUID="7697ce71-aa40-4c78-acaa-c59079720a2c" Aug 13 00:52:55.528749 systemd[1]: Started sshd@29-172.234.199.101:22-147.75.109.163:55746.service - OpenSSH per-connection server daemon (147.75.109.163:55746). Aug 13 00:52:55.880315 sshd[5581]: Accepted publickey for core from 147.75.109.163 port 55746 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw Aug 13 00:52:55.881846 sshd-session[5581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:52:55.887371 systemd-logind[1528]: New session 26 of user core. Aug 13 00:52:55.895642 systemd[1]: Started session-26.scope - Session 26 of User core. Aug 13 00:52:56.188683 sshd[5583]: Connection closed by 147.75.109.163 port 55746 Aug 13 00:52:56.189414 sshd-session[5581]: pam_unix(sshd:session): session closed for user core Aug 13 00:52:56.194457 systemd[1]: sshd@29-172.234.199.101:22-147.75.109.163:55746.service: Deactivated successfully. Aug 13 00:52:56.197507 systemd[1]: session-26.scope: Deactivated successfully. Aug 13 00:52:56.198775 systemd-logind[1528]: Session 26 logged out. Waiting for processes to exit. Aug 13 00:52:56.200360 systemd-logind[1528]: Removed session 26. Aug 13 00:52:56.956691 containerd[1575]: time="2025-08-13T00:52:56.956648486Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 00:52:58.955408 containerd[1575]: time="2025-08-13T00:52:58.955358768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d647ccb87-wkv5s,Uid:2addd270-7ed2-4caf-9455-ca6a63f6fe8b,Namespace:calico-system,Attempt:0,}" Aug 13 00:52:59.047395 containerd[1575]: time="2025-08-13T00:52:59.047349775Z" level=error msg="Failed to destroy network for sandbox \"cbd5e84e42d035f59fcfb1fa9a5341add5181e8f9e2b78e3cb3307396e943649\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:59.049406 containerd[1575]: time="2025-08-13T00:52:59.049001958Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d647ccb87-wkv5s,Uid:2addd270-7ed2-4caf-9455-ca6a63f6fe8b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbd5e84e42d035f59fcfb1fa9a5341add5181e8f9e2b78e3cb3307396e943649\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:59.049801 kubelet[2714]: E0813 00:52:59.049722 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbd5e84e42d035f59fcfb1fa9a5341add5181e8f9e2b78e3cb3307396e943649\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:59.050828 kubelet[2714]: E0813 00:52:59.049895 2714 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbd5e84e42d035f59fcfb1fa9a5341add5181e8f9e2b78e3cb3307396e943649\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" Aug 13 00:52:59.050828 kubelet[2714]: E0813 00:52:59.049917 2714 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbd5e84e42d035f59fcfb1fa9a5341add5181e8f9e2b78e3cb3307396e943649\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" Aug 13 00:52:59.050828 kubelet[2714]: E0813 00:52:59.050397 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6d647ccb87-wkv5s_calico-system(2addd270-7ed2-4caf-9455-ca6a63f6fe8b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6d647ccb87-wkv5s_calico-system(2addd270-7ed2-4caf-9455-ca6a63f6fe8b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cbd5e84e42d035f59fcfb1fa9a5341add5181e8f9e2b78e3cb3307396e943649\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" podUID="2addd270-7ed2-4caf-9455-ca6a63f6fe8b" Aug 13 00:52:59.050791 systemd[1]: run-netns-cni\x2d38e76f9e\x2d74cb\x2d472d\x2dd051\x2dc3aae7a0c6e3.mount: Deactivated successfully. Aug 13 00:53:00.955690 kubelet[2714]: E0813 00:53:00.955651 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:53:00.957570 containerd[1575]: time="2025-08-13T00:53:00.957150107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hxx58,Uid:635fd78e-3d10-4a30-9894-3818897e1867,Namespace:kube-system,Attempt:0,}" Aug 13 00:53:01.038039 containerd[1575]: time="2025-08-13T00:53:01.037996581Z" level=error msg="Failed to destroy network for sandbox \"335336e40a555972fa12e99fb267c391adbb7e55e7d78edfd3d29c6ae482e23a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:53:01.040370 systemd[1]: run-netns-cni\x2d1314712c\x2da3f2\x2d76c7\x2d1f63\x2da48967113973.mount: Deactivated successfully. 
Aug 13 00:53:01.041881 containerd[1575]: time="2025-08-13T00:53:01.040417463Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hxx58,Uid:635fd78e-3d10-4a30-9894-3818897e1867,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"335336e40a555972fa12e99fb267c391adbb7e55e7d78edfd3d29c6ae482e23a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:53:01.041957 kubelet[2714]: E0813 00:53:01.041730 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"335336e40a555972fa12e99fb267c391adbb7e55e7d78edfd3d29c6ae482e23a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:53:01.041957 kubelet[2714]: E0813 00:53:01.041787 2714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"335336e40a555972fa12e99fb267c391adbb7e55e7d78edfd3d29c6ae482e23a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-hxx58" Aug 13 00:53:01.041957 kubelet[2714]: E0813 00:53:01.041806 2714 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"335336e40a555972fa12e99fb267c391adbb7e55e7d78edfd3d29c6ae482e23a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-hxx58" Aug 13 00:53:01.041957 kubelet[2714]: E0813 00:53:01.041844 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-hxx58_kube-system(635fd78e-3d10-4a30-9894-3818897e1867)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-hxx58_kube-system(635fd78e-3d10-4a30-9894-3818897e1867)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"335336e40a555972fa12e99fb267c391adbb7e55e7d78edfd3d29c6ae482e23a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-hxx58" podUID="635fd78e-3d10-4a30-9894-3818897e1867" Aug 13 00:53:01.159115 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3858954840.mount: Deactivated successfully. 
Aug 13 00:53:01.194999 containerd[1575]: time="2025-08-13T00:53:01.194971723Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:53:01.195910 containerd[1575]: time="2025-08-13T00:53:01.195862969Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Aug 13 00:53:01.196628 containerd[1575]: time="2025-08-13T00:53:01.196590388Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:53:01.198543 containerd[1575]: time="2025-08-13T00:53:01.198481578Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:53:01.199183 containerd[1575]: time="2025-08-13T00:53:01.199078269Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 4.242397794s" Aug 13 00:53:01.199183 containerd[1575]: time="2025-08-13T00:53:01.199102759Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Aug 13 00:53:01.215251 containerd[1575]: time="2025-08-13T00:53:01.215090100Z" level=info msg="CreateContainer within sandbox \"1c8288186968964e8781e7fefb015ce476b20e510bc2c980e05bbd8a3e06754e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 13 00:53:01.223768 containerd[1575]: time="2025-08-13T00:53:01.223723276Z" level=info msg="Container 9345d3fc4e163aea4299b925fbb68f1fc68bf5ae7a05616fb050ef225cf5d900: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:53:01.233915 containerd[1575]: time="2025-08-13T00:53:01.233866819Z" level=info msg="CreateContainer within sandbox \"1c8288186968964e8781e7fefb015ce476b20e510bc2c980e05bbd8a3e06754e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9345d3fc4e163aea4299b925fbb68f1fc68bf5ae7a05616fb050ef225cf5d900\"" Aug 13 00:53:01.234454 containerd[1575]: time="2025-08-13T00:53:01.234425580Z" level=info msg="StartContainer for \"9345d3fc4e163aea4299b925fbb68f1fc68bf5ae7a05616fb050ef225cf5d900\"" Aug 13 00:53:01.235746 containerd[1575]: time="2025-08-13T00:53:01.235702320Z" level=info msg="connecting to shim 9345d3fc4e163aea4299b925fbb68f1fc68bf5ae7a05616fb050ef225cf5d900" address="unix:///run/containerd/s/2f64bef4556efd88bdc0bed0d4eac38e4fdfa25b706abb4b5ce44183cd686752" protocol=ttrpc version=3 Aug 13 00:53:01.251802 systemd[1]: Started sshd@30-172.234.199.101:22-147.75.109.163:32838.service - OpenSSH per-connection server daemon (147.75.109.163:32838). Aug 13 00:53:01.264998 systemd[1]: Started cri-containerd-9345d3fc4e163aea4299b925fbb68f1fc68bf5ae7a05616fb050ef225cf5d900.scope - libcontainer container 9345d3fc4e163aea4299b925fbb68f1fc68bf5ae7a05616fb050ef225cf5d900. 
Aug 13 00:53:01.336542 containerd[1575]: time="2025-08-13T00:53:01.335961353Z" level=info msg="StartContainer for \"9345d3fc4e163aea4299b925fbb68f1fc68bf5ae7a05616fb050ef225cf5d900\" returns successfully" Aug 13 00:53:01.452084 kubelet[2714]: I0813 00:53:01.452031 2714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-x7x94" podStartSLOduration=1.550120466 podStartE2EDuration="3m8.45201616s" podCreationTimestamp="2025-08-13 00:49:53 +0000 UTC" firstStartedPulling="2025-08-13 00:49:54.298318247 +0000 UTC m=+16.460138366" lastFinishedPulling="2025-08-13 00:53:01.200213941 +0000 UTC m=+203.362034060" observedRunningTime="2025-08-13 00:53:01.443512662 +0000 UTC m=+203.605332781" watchObservedRunningTime="2025-08-13 00:53:01.45201616 +0000 UTC m=+203.613836279" Aug 13 00:53:01.455679 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 13 00:53:01.455734 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Aug 13 00:53:01.536941 containerd[1575]: time="2025-08-13T00:53:01.536064914Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9345d3fc4e163aea4299b925fbb68f1fc68bf5ae7a05616fb050ef225cf5d900\" id:\"f4e78c2fb689db64a41adf03dfcd37258752fa4e3d0b5696c56b370239f8da10\" pid:5714 exit_status:1 exited_at:{seconds:1755046381 nanos:534927922}" Aug 13 00:53:01.610709 sshd[5667]: Accepted publickey for core from 147.75.109.163 port 32838 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw Aug 13 00:53:01.613947 sshd-session[5667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:53:01.623662 systemd-logind[1528]: New session 27 of user core. Aug 13 00:53:01.629865 systemd[1]: Started session-27.scope - Session 27 of User core. Aug 13 00:53:01.933094 sshd[5732]: Connection closed by 147.75.109.163 port 32838 Aug 13 00:53:01.933752 sshd-session[5667]: pam_unix(sshd:session): session closed for user core Aug 13 00:53:01.938551 systemd-logind[1528]: Session 27 logged out. Waiting for processes to exit. Aug 13 00:53:01.939302 systemd[1]: sshd@30-172.234.199.101:22-147.75.109.163:32838.service: Deactivated successfully. Aug 13 00:53:01.941392 systemd[1]: session-27.scope: Deactivated successfully. Aug 13 00:53:01.943651 systemd-logind[1528]: Removed session 27.
Aug 13 00:53:02.527222 containerd[1575]: time="2025-08-13T00:53:02.527122300Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9345d3fc4e163aea4299b925fbb68f1fc68bf5ae7a05616fb050ef225cf5d900\" id:\"5c41952c8958e7a896fad29a4bc7916a80c0f11085d35b2a9c2baa5975e592ad\" pid:5766 exit_status:1 exited_at:{seconds:1755046382 nanos:526765726}" Aug 13 00:53:03.566358 systemd-networkd[1474]: vxlan.calico: Link UP Aug 13 00:53:03.566367 systemd-networkd[1474]: vxlan.calico: Gained carrier Aug 13 00:53:03.784593 kubelet[2714]: I0813 00:53:03.784550 2714 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:53:03.786529 kubelet[2714]: I0813 00:53:03.785994 2714 container_gc.go:88] "Attempting to delete unused containers" Aug 13 00:53:03.788834 kubelet[2714]: I0813 00:53:03.788773 2714 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:53:03.791504 kubelet[2714]: I0813 00:53:03.791331 2714 image_gc_manager.go:487] "Removing image to free bytes" imageID="sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93" size=25052538 runtimeHandler="" Aug 13 00:53:03.792304 containerd[1575]: time="2025-08-13T00:53:03.792146841Z" level=info msg="RemoveImage \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Aug 13 00:53:03.793542 containerd[1575]: time="2025-08-13T00:53:03.793495660Z" level=info msg="ImageDelete event name:\"quay.io/tigera/operator:v1.38.3\"" Aug 13 00:53:03.795390 containerd[1575]: time="2025-08-13T00:53:03.795307583Z" level=info msg="ImageDelete event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\"" Aug 13 00:53:03.795695 containerd[1575]: time="2025-08-13T00:53:03.795663448Z" level=info msg="RemoveImage \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" returns successfully" Aug 13 00:53:03.795979 containerd[1575]: time="2025-08-13T00:53:03.795962413Z" level=info msg="ImageDelete event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Aug 13 00:53:03.814534 kubelet[2714]: I0813 00:53:03.814503 2714 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:53:03.814695 kubelet[2714]: I0813 00:53:03.814678 2714 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-6d647ccb87-wkv5s","kube-system/coredns-7c65d6cfc9-hxx58","kube-system/coredns-7c65d6cfc9-dnlsw","calico-system/csi-node-driver-mmxc6","calico-system/calico-typha-644589c98-5v7wp","kube-system/kube-controller-manager-172-234-199-101","calico-system/calico-node-x7x94","kube-system/kube-proxy-vxgfg","kube-system/kube-apiserver-172-234-199-101","kube-system/kube-scheduler-172-234-199-101"] Aug 13 00:53:03.814796 kubelet[2714]: E0813 00:53:03.814784 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" Aug 13 00:53:03.814847 kubelet[2714]: E0813 00:53:03.814839 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-hxx58" Aug 13 00:53:03.814969 kubelet[2714]: E0813 00:53:03.814886 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-dnlsw" Aug 13 00:53:03.814969 kubelet[2714]: E0813 00:53:03.814896 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" 
pod="calico-system/csi-node-driver-mmxc6" Aug 13 00:53:03.814969 kubelet[2714]: E0813 00:53:03.814907 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-644589c98-5v7wp" Aug 13 00:53:03.814969 kubelet[2714]: E0813 00:53:03.814915 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-199-101" Aug 13 00:53:03.814969 kubelet[2714]: E0813 00:53:03.814923 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-x7x94" Aug 13 00:53:03.814969 kubelet[2714]: E0813 00:53:03.814930 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-vxgfg" Aug 13 00:53:03.814969 kubelet[2714]: E0813 00:53:03.814940 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-199-101" Aug 13 00:53:03.814969 kubelet[2714]: E0813 00:53:03.814950 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-199-101" Aug 13 00:53:03.814969 kubelet[2714]: I0813 00:53:03.814960 2714 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 00:53:04.892761 systemd-networkd[1474]: vxlan.calico: Gained IPv6LL Aug 13 00:53:06.957460 kubelet[2714]: E0813 00:53:06.957420 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:53:06.958221 containerd[1575]: time="2025-08-13T00:53:06.958067675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dnlsw,Uid:fd8972e4-10f5-4f13-8b21-de07e7f562ab,Namespace:kube-system,Attempt:0,}" Aug 13 00:53:06.996763 systemd[1]: Started sshd@31-172.234.199.101:22-147.75.109.163:32850.service - OpenSSH per-connection server daemon (147.75.109.163:32850). 
Aug 13 00:53:07.125856 systemd-networkd[1474]: cali5beadf05d39: Link UP Aug 13 00:53:07.127599 systemd-networkd[1474]: cali5beadf05d39: Gained carrier Aug 13 00:53:07.153928 containerd[1575]: 2025-08-13 00:53:07.034 [INFO][5966] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--199--101-k8s-coredns--7c65d6cfc9--dnlsw-eth0 coredns-7c65d6cfc9- kube-system fd8972e4-10f5-4f13-8b21-de07e7f562ab 800 0 2025-08-13 00:49:43 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-234-199-101 coredns-7c65d6cfc9-dnlsw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5beadf05d39 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="acf1c09af5fbd9d1ee234c56f3f2ec708deb9f55b145b8b7d731965b7dd72a15" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dnlsw" WorkloadEndpoint="172--234--199--101-k8s-coredns--7c65d6cfc9--dnlsw-" Aug 13 00:53:07.153928 containerd[1575]: 2025-08-13 00:53:07.034 [INFO][5966] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="acf1c09af5fbd9d1ee234c56f3f2ec708deb9f55b145b8b7d731965b7dd72a15" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dnlsw" WorkloadEndpoint="172--234--199--101-k8s-coredns--7c65d6cfc9--dnlsw-eth0" Aug 13 00:53:07.153928 containerd[1575]: 2025-08-13 00:53:07.068 [INFO][5980] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="acf1c09af5fbd9d1ee234c56f3f2ec708deb9f55b145b8b7d731965b7dd72a15" HandleID="k8s-pod-network.acf1c09af5fbd9d1ee234c56f3f2ec708deb9f55b145b8b7d731965b7dd72a15" Workload="172--234--199--101-k8s-coredns--7c65d6cfc9--dnlsw-eth0" Aug 13 00:53:07.154072 containerd[1575]: 2025-08-13 00:53:07.070 [INFO][5980] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="acf1c09af5fbd9d1ee234c56f3f2ec708deb9f55b145b8b7d731965b7dd72a15" HandleID="k8s-pod-network.acf1c09af5fbd9d1ee234c56f3f2ec708deb9f55b145b8b7d731965b7dd72a15" Workload="172--234--199--101-k8s-coredns--7c65d6cfc9--dnlsw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5310), Attrs:map[string]string{"namespace":"kube-system", "node":"172-234-199-101", "pod":"coredns-7c65d6cfc9-dnlsw", "timestamp":"2025-08-13 00:53:07.068621065 +0000 UTC"}, Hostname:"172-234-199-101", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:53:07.154072 containerd[1575]: 2025-08-13 00:53:07.070 [INFO][5980] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:53:07.154072 containerd[1575]: 2025-08-13 00:53:07.070 [INFO][5980] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:53:07.154072 containerd[1575]: 2025-08-13 00:53:07.070 [INFO][5980] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-199-101' Aug 13 00:53:07.154072 containerd[1575]: 2025-08-13 00:53:07.077 [INFO][5980] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.acf1c09af5fbd9d1ee234c56f3f2ec708deb9f55b145b8b7d731965b7dd72a15" host="172-234-199-101" Aug 13 00:53:07.154072 containerd[1575]: 2025-08-13 00:53:07.084 [INFO][5980] ipam/ipam.go 394: Looking up existing affinities for host host="172-234-199-101" Aug 13 00:53:07.154072 containerd[1575]: 2025-08-13 00:53:07.091 [INFO][5980] ipam/ipam.go 511: Trying affinity for 192.168.72.128/26 host="172-234-199-101" Aug 13 00:53:07.154072 containerd[1575]: 2025-08-13 00:53:07.093 [INFO][5980] ipam/ipam.go 158: Attempting to load block cidr=192.168.72.128/26 host="172-234-199-101" Aug 13 00:53:07.154072 containerd[1575]: 2025-08-13 00:53:07.095 [INFO][5980] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.72.128/26 host="172-234-199-101" Aug 13 00:53:07.154072 containerd[1575]: 2025-08-13 00:53:07.095 [INFO][5980] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.72.128/26 handle="k8s-pod-network.acf1c09af5fbd9d1ee234c56f3f2ec708deb9f55b145b8b7d731965b7dd72a15" host="172-234-199-101" Aug 13 00:53:07.154292 containerd[1575]: 2025-08-13 00:53:07.097 [INFO][5980] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.acf1c09af5fbd9d1ee234c56f3f2ec708deb9f55b145b8b7d731965b7dd72a15 Aug 13 00:53:07.154292 containerd[1575]: 2025-08-13 00:53:07.101 [INFO][5980] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.72.128/26 handle="k8s-pod-network.acf1c09af5fbd9d1ee234c56f3f2ec708deb9f55b145b8b7d731965b7dd72a15" host="172-234-199-101" Aug 13 00:53:07.154292 containerd[1575]: 2025-08-13 00:53:07.106 [INFO][5980] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.72.129/26] block=192.168.72.128/26 handle="k8s-pod-network.acf1c09af5fbd9d1ee234c56f3f2ec708deb9f55b145b8b7d731965b7dd72a15" host="172-234-199-101" Aug 13 00:53:07.154292 containerd[1575]: 2025-08-13 00:53:07.106 [INFO][5980] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.72.129/26] handle="k8s-pod-network.acf1c09af5fbd9d1ee234c56f3f2ec708deb9f55b145b8b7d731965b7dd72a15" host="172-234-199-101" Aug 13 00:53:07.154292 containerd[1575]: 2025-08-13 00:53:07.106 [INFO][5980] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 00:53:07.154292 containerd[1575]: 2025-08-13 00:53:07.106 [INFO][5980] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.72.129/26] IPv6=[] ContainerID="acf1c09af5fbd9d1ee234c56f3f2ec708deb9f55b145b8b7d731965b7dd72a15" HandleID="k8s-pod-network.acf1c09af5fbd9d1ee234c56f3f2ec708deb9f55b145b8b7d731965b7dd72a15" Workload="172--234--199--101-k8s-coredns--7c65d6cfc9--dnlsw-eth0" Aug 13 00:53:07.154405 containerd[1575]: 2025-08-13 00:53:07.116 [INFO][5966] cni-plugin/k8s.go 418: Populated endpoint ContainerID="acf1c09af5fbd9d1ee234c56f3f2ec708deb9f55b145b8b7d731965b7dd72a15" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dnlsw" WorkloadEndpoint="172--234--199--101-k8s-coredns--7c65d6cfc9--dnlsw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--199--101-k8s-coredns--7c65d6cfc9--dnlsw-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"fd8972e4-10f5-4f13-8b21-de07e7f562ab", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 49, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-199-101", ContainerID:"", Pod:"coredns-7c65d6cfc9-dnlsw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.72.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5beadf05d39", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:53:07.154405 containerd[1575]: 2025-08-13 00:53:07.116 [INFO][5966] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.72.129/32] ContainerID="acf1c09af5fbd9d1ee234c56f3f2ec708deb9f55b145b8b7d731965b7dd72a15" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dnlsw" WorkloadEndpoint="172--234--199--101-k8s-coredns--7c65d6cfc9--dnlsw-eth0" Aug 13 00:53:07.154405 containerd[1575]: 2025-08-13 00:53:07.117 [INFO][5966] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5beadf05d39 ContainerID="acf1c09af5fbd9d1ee234c56f3f2ec708deb9f55b145b8b7d731965b7dd72a15" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dnlsw" WorkloadEndpoint="172--234--199--101-k8s-coredns--7c65d6cfc9--dnlsw-eth0" Aug 13 00:53:07.154405 containerd[1575]: 2025-08-13 00:53:07.128 [INFO][5966] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="acf1c09af5fbd9d1ee234c56f3f2ec708deb9f55b145b8b7d731965b7dd72a15" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dnlsw" 
WorkloadEndpoint="172--234--199--101-k8s-coredns--7c65d6cfc9--dnlsw-eth0" Aug 13 00:53:07.154405 containerd[1575]: 2025-08-13 00:53:07.129 [INFO][5966] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="acf1c09af5fbd9d1ee234c56f3f2ec708deb9f55b145b8b7d731965b7dd72a15" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dnlsw" WorkloadEndpoint="172--234--199--101-k8s-coredns--7c65d6cfc9--dnlsw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--199--101-k8s-coredns--7c65d6cfc9--dnlsw-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"fd8972e4-10f5-4f13-8b21-de07e7f562ab", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 49, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-199-101", ContainerID:"acf1c09af5fbd9d1ee234c56f3f2ec708deb9f55b145b8b7d731965b7dd72a15", Pod:"coredns-7c65d6cfc9-dnlsw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.72.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5beadf05d39", MAC:"5a:b4:fe:be:32:62", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:53:07.154405 containerd[1575]: 2025-08-13 00:53:07.148 [INFO][5966] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="acf1c09af5fbd9d1ee234c56f3f2ec708deb9f55b145b8b7d731965b7dd72a15" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dnlsw" WorkloadEndpoint="172--234--199--101-k8s-coredns--7c65d6cfc9--dnlsw-eth0" Aug 13 00:53:07.215931 containerd[1575]: time="2025-08-13T00:53:07.215787724Z" level=info msg="connecting to shim acf1c09af5fbd9d1ee234c56f3f2ec708deb9f55b145b8b7d731965b7dd72a15" address="unix:///run/containerd/s/1f972386f0625f845cc04d957a56ad3e61671410c386f0b6e830e14e2dfcfe4e" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:53:07.251659 systemd[1]: Started cri-containerd-acf1c09af5fbd9d1ee234c56f3f2ec708deb9f55b145b8b7d731965b7dd72a15.scope - libcontainer container acf1c09af5fbd9d1ee234c56f3f2ec708deb9f55b145b8b7d731965b7dd72a15. 
Aug 13 00:53:07.299689 containerd[1575]: time="2025-08-13T00:53:07.299619763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dnlsw,Uid:fd8972e4-10f5-4f13-8b21-de07e7f562ab,Namespace:kube-system,Attempt:0,} returns sandbox id \"acf1c09af5fbd9d1ee234c56f3f2ec708deb9f55b145b8b7d731965b7dd72a15\"" Aug 13 00:53:07.300511 kubelet[2714]: E0813 00:53:07.300488 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:53:07.301824 containerd[1575]: time="2025-08-13T00:53:07.301806182Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 00:53:07.367812 sshd[5976]: Accepted publickey for core from 147.75.109.163 port 32850 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw Aug 13 00:53:07.370585 sshd-session[5976]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:53:07.376613 systemd-logind[1528]: New session 28 of user core. Aug 13 00:53:07.384655 systemd[1]: Started session-28.scope - Session 28 of User core. Aug 13 00:53:07.686067 sshd[6047]: Connection closed by 147.75.109.163 port 32850 Aug 13 00:53:07.687724 sshd-session[5976]: pam_unix(sshd:session): session closed for user core Aug 13 00:53:07.693438 systemd[1]: sshd@31-172.234.199.101:22-147.75.109.163:32850.service: Deactivated successfully. Aug 13 00:53:07.696462 systemd[1]: session-28.scope: Deactivated successfully. Aug 13 00:53:07.698412 systemd-logind[1528]: Session 28 logged out. Waiting for processes to exit. Aug 13 00:53:07.701241 systemd-logind[1528]: Removed session 28. Aug 13 00:53:08.349633 systemd-networkd[1474]: cali5beadf05d39: Gained IPv6LL Aug 13 00:53:08.536669 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount464145268.mount: Deactivated successfully. 
Aug 13 00:53:09.307561 containerd[1575]: time="2025-08-13T00:53:09.307465877Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:53:09.309316 containerd[1575]: time="2025-08-13T00:53:09.308872517Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Aug 13 00:53:09.309316 containerd[1575]: time="2025-08-13T00:53:09.308957776Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:53:09.311877 containerd[1575]: time="2025-08-13T00:53:09.311750938Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:53:09.312730 containerd[1575]: time="2025-08-13T00:53:09.312708974Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.010539517s" Aug 13 00:53:09.312809 containerd[1575]: time="2025-08-13T00:53:09.312794973Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 13 00:53:09.317169 containerd[1575]: time="2025-08-13T00:53:09.317146763Z" level=info msg="CreateContainer within sandbox \"acf1c09af5fbd9d1ee234c56f3f2ec708deb9f55b145b8b7d731965b7dd72a15\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:53:09.324307 containerd[1575]: time="2025-08-13T00:53:09.323672123Z" level=info msg="Container 4a0671bfc93ba17396d61c8d3ec71f1c248cd06f40c31950f0eedbbe29874985: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:53:09.342651 containerd[1575]: time="2025-08-13T00:53:09.342600042Z" level=info msg="CreateContainer within sandbox \"acf1c09af5fbd9d1ee234c56f3f2ec708deb9f55b145b8b7d731965b7dd72a15\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4a0671bfc93ba17396d61c8d3ec71f1c248cd06f40c31950f0eedbbe29874985\"" Aug 13 00:53:09.343220 containerd[1575]: time="2025-08-13T00:53:09.343156334Z" level=info msg="StartContainer for \"4a0671bfc93ba17396d61c8d3ec71f1c248cd06f40c31950f0eedbbe29874985\"" Aug 13 00:53:09.344204 containerd[1575]: time="2025-08-13T00:53:09.344167400Z" level=info msg="connecting to shim 4a0671bfc93ba17396d61c8d3ec71f1c248cd06f40c31950f0eedbbe29874985" address="unix:///run/containerd/s/1f972386f0625f845cc04d957a56ad3e61671410c386f0b6e830e14e2dfcfe4e" protocol=ttrpc version=3 Aug 13 00:53:09.377653 systemd[1]: Started cri-containerd-4a0671bfc93ba17396d61c8d3ec71f1c248cd06f40c31950f0eedbbe29874985.scope - libcontainer container 4a0671bfc93ba17396d61c8d3ec71f1c248cd06f40c31950f0eedbbe29874985. 
Aug 13 00:53:09.410633 containerd[1575]: time="2025-08-13T00:53:09.410579985Z" level=info msg="StartContainer for \"4a0671bfc93ba17396d61c8d3ec71f1c248cd06f40c31950f0eedbbe29874985\" returns successfully" Aug 13 00:53:09.430146 kubelet[2714]: E0813 00:53:09.430111 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:53:09.452261 kubelet[2714]: I0813 00:53:09.452211 2714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-dnlsw" podStartSLOduration=204.438770074 podStartE2EDuration="3m26.452195391s" podCreationTimestamp="2025-08-13 00:49:43 +0000 UTC" firstStartedPulling="2025-08-13 00:53:07.301409598 +0000 UTC m=+209.463229717" lastFinishedPulling="2025-08-13 00:53:09.314834915 +0000 UTC m=+211.476655034" observedRunningTime="2025-08-13 00:53:09.445645561 +0000 UTC m=+211.607465680" watchObservedRunningTime="2025-08-13 00:53:09.452195391 +0000 UTC m=+211.614015510" Aug 13 00:53:09.955958 containerd[1575]: time="2025-08-13T00:53:09.955876105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mmxc6,Uid:7697ce71-aa40-4c78-acaa-c59079720a2c,Namespace:calico-system,Attempt:0,}" Aug 13 00:53:10.077722 systemd-networkd[1474]: cali1af29e950b4: Link UP Aug 13 00:53:10.078618 systemd-networkd[1474]: cali1af29e950b4: Gained carrier Aug 13 00:53:10.102379 containerd[1575]: 2025-08-13 00:53:09.997 [INFO][6153] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--199--101-k8s-csi--node--driver--mmxc6-eth0 csi-node-driver- calico-system 7697ce71-aa40-4c78-acaa-c59079720a2c 704 0 2025-08-13 00:49:54 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172-234-199-101 csi-node-driver-mmxc6 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali1af29e950b4 [] [] }} ContainerID="b0e8ca5a8c5e2ca695812a5ac115b7519f60f0cd5ef775c006f825caaaa1c407" Namespace="calico-system" Pod="csi-node-driver-mmxc6" WorkloadEndpoint="172--234--199--101-k8s-csi--node--driver--mmxc6-" Aug 13 00:53:10.102379 containerd[1575]: 2025-08-13 00:53:09.997 [INFO][6153] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b0e8ca5a8c5e2ca695812a5ac115b7519f60f0cd5ef775c006f825caaaa1c407" Namespace="calico-system" Pod="csi-node-driver-mmxc6" WorkloadEndpoint="172--234--199--101-k8s-csi--node--driver--mmxc6-eth0" Aug 13 00:53:10.102379 containerd[1575]: 2025-08-13 00:53:10.032 [INFO][6165] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b0e8ca5a8c5e2ca695812a5ac115b7519f60f0cd5ef775c006f825caaaa1c407" HandleID="k8s-pod-network.b0e8ca5a8c5e2ca695812a5ac115b7519f60f0cd5ef775c006f825caaaa1c407" Workload="172--234--199--101-k8s-csi--node--driver--mmxc6-eth0" Aug 13 00:53:10.102379 containerd[1575]: 2025-08-13 00:53:10.033 [INFO][6165] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b0e8ca5a8c5e2ca695812a5ac115b7519f60f0cd5ef775c006f825caaaa1c407" HandleID="k8s-pod-network.b0e8ca5a8c5e2ca695812a5ac115b7519f60f0cd5ef775c006f825caaaa1c407" Workload="172--234--199--101-k8s-csi--node--driver--mmxc6-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f020), Attrs:map[string]string{"namespace":"calico-system", "node":"172-234-199-101", "pod":"csi-node-driver-mmxc6", "timestamp":"2025-08-13 00:53:10.03287518 +0000 UTC"}, Hostname:"172-234-199-101", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:53:10.102379 containerd[1575]: 2025-08-13 00:53:10.033 [INFO][6165] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:53:10.102379 containerd[1575]: 2025-08-13 00:53:10.033 [INFO][6165] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:53:10.102379 containerd[1575]: 2025-08-13 00:53:10.033 [INFO][6165] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-199-101' Aug 13 00:53:10.102379 containerd[1575]: 2025-08-13 00:53:10.044 [INFO][6165] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b0e8ca5a8c5e2ca695812a5ac115b7519f60f0cd5ef775c006f825caaaa1c407" host="172-234-199-101" Aug 13 00:53:10.102379 containerd[1575]: 2025-08-13 00:53:10.049 [INFO][6165] ipam/ipam.go 394: Looking up existing affinities for host host="172-234-199-101" Aug 13 00:53:10.102379 containerd[1575]: 2025-08-13 00:53:10.053 [INFO][6165] ipam/ipam.go 511: Trying affinity for 192.168.72.128/26 host="172-234-199-101" Aug 13 00:53:10.102379 containerd[1575]: 2025-08-13 00:53:10.055 [INFO][6165] ipam/ipam.go 158: Attempting to load block cidr=192.168.72.128/26 host="172-234-199-101" Aug 13 00:53:10.102379 containerd[1575]: 2025-08-13 00:53:10.057 [INFO][6165] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.72.128/26 host="172-234-199-101" Aug 13 00:53:10.102379 containerd[1575]: 2025-08-13 00:53:10.057 [INFO][6165] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.72.128/26 handle="k8s-pod-network.b0e8ca5a8c5e2ca695812a5ac115b7519f60f0cd5ef775c006f825caaaa1c407" host="172-234-199-101" Aug 13 00:53:10.102379 containerd[1575]: 2025-08-13 00:53:10.059 [INFO][6165] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b0e8ca5a8c5e2ca695812a5ac115b7519f60f0cd5ef775c006f825caaaa1c407 Aug 13 00:53:10.102379 containerd[1575]: 2025-08-13 00:53:10.064 [INFO][6165] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.72.128/26 handle="k8s-pod-network.b0e8ca5a8c5e2ca695812a5ac115b7519f60f0cd5ef775c006f825caaaa1c407" host="172-234-199-101" Aug 13 00:53:10.102379 containerd[1575]: 2025-08-13 00:53:10.070 [INFO][6165] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.72.130/26] block=192.168.72.128/26 handle="k8s-pod-network.b0e8ca5a8c5e2ca695812a5ac115b7519f60f0cd5ef775c006f825caaaa1c407" host="172-234-199-101" Aug 13 00:53:10.102379 containerd[1575]: 2025-08-13 00:53:10.070 [INFO][6165] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.72.130/26] handle="k8s-pod-network.b0e8ca5a8c5e2ca695812a5ac115b7519f60f0cd5ef775c006f825caaaa1c407" host="172-234-199-101" Aug 13 00:53:10.102379 containerd[1575]: 2025-08-13 00:53:10.070 [INFO][6165] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 00:53:10.102379 containerd[1575]: 2025-08-13 00:53:10.070 [INFO][6165] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.72.130/26] IPv6=[] ContainerID="b0e8ca5a8c5e2ca695812a5ac115b7519f60f0cd5ef775c006f825caaaa1c407" HandleID="k8s-pod-network.b0e8ca5a8c5e2ca695812a5ac115b7519f60f0cd5ef775c006f825caaaa1c407" Workload="172--234--199--101-k8s-csi--node--driver--mmxc6-eth0" Aug 13 00:53:10.103110 containerd[1575]: 2025-08-13 00:53:10.074 [INFO][6153] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b0e8ca5a8c5e2ca695812a5ac115b7519f60f0cd5ef775c006f825caaaa1c407" Namespace="calico-system" Pod="csi-node-driver-mmxc6" WorkloadEndpoint="172--234--199--101-k8s-csi--node--driver--mmxc6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--199--101-k8s-csi--node--driver--mmxc6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7697ce71-aa40-4c78-acaa-c59079720a2c", ResourceVersion:"704", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 49, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-199-101", ContainerID:"", Pod:"csi-node-driver-mmxc6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.72.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1af29e950b4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:53:10.103110 containerd[1575]: 2025-08-13 00:53:10.074 [INFO][6153] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.72.130/32] ContainerID="b0e8ca5a8c5e2ca695812a5ac115b7519f60f0cd5ef775c006f825caaaa1c407" Namespace="calico-system" Pod="csi-node-driver-mmxc6" WorkloadEndpoint="172--234--199--101-k8s-csi--node--driver--mmxc6-eth0" Aug 13 00:53:10.103110 containerd[1575]: 2025-08-13 00:53:10.074 [INFO][6153] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1af29e950b4 ContainerID="b0e8ca5a8c5e2ca695812a5ac115b7519f60f0cd5ef775c006f825caaaa1c407" Namespace="calico-system" Pod="csi-node-driver-mmxc6" WorkloadEndpoint="172--234--199--101-k8s-csi--node--driver--mmxc6-eth0" Aug 13 00:53:10.103110 containerd[1575]: 2025-08-13 00:53:10.079 [INFO][6153] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b0e8ca5a8c5e2ca695812a5ac115b7519f60f0cd5ef775c006f825caaaa1c407" Namespace="calico-system" Pod="csi-node-driver-mmxc6" WorkloadEndpoint="172--234--199--101-k8s-csi--node--driver--mmxc6-eth0" Aug 13 00:53:10.103110 containerd[1575]: 2025-08-13 00:53:10.081 [INFO][6153] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b0e8ca5a8c5e2ca695812a5ac115b7519f60f0cd5ef775c006f825caaaa1c407" 
Namespace="calico-system" Pod="csi-node-driver-mmxc6" WorkloadEndpoint="172--234--199--101-k8s-csi--node--driver--mmxc6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--199--101-k8s-csi--node--driver--mmxc6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7697ce71-aa40-4c78-acaa-c59079720a2c", ResourceVersion:"704", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 49, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-199-101", ContainerID:"b0e8ca5a8c5e2ca695812a5ac115b7519f60f0cd5ef775c006f825caaaa1c407", Pod:"csi-node-driver-mmxc6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.72.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1af29e950b4", MAC:"ee:9f:9a:9b:79:35", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:53:10.103110 containerd[1575]: 2025-08-13 00:53:10.096 [INFO][6153] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b0e8ca5a8c5e2ca695812a5ac115b7519f60f0cd5ef775c006f825caaaa1c407" Namespace="calico-system" Pod="csi-node-driver-mmxc6" WorkloadEndpoint="172--234--199--101-k8s-csi--node--driver--mmxc6-eth0" Aug 13 00:53:10.140540 containerd[1575]: time="2025-08-13T00:53:10.140438028Z" level=info msg="connecting to shim b0e8ca5a8c5e2ca695812a5ac115b7519f60f0cd5ef775c006f825caaaa1c407" address="unix:///run/containerd/s/66206492c8eb78f2d0724293b5245c0918e6bb1c3df8aaba6f078864f7e2b86d" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:53:10.180664 systemd[1]: Started cri-containerd-b0e8ca5a8c5e2ca695812a5ac115b7519f60f0cd5ef775c006f825caaaa1c407.scope - libcontainer container b0e8ca5a8c5e2ca695812a5ac115b7519f60f0cd5ef775c006f825caaaa1c407. 
Aug 13 00:53:10.208506 containerd[1575]: time="2025-08-13T00:53:10.208419194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mmxc6,Uid:7697ce71-aa40-4c78-acaa-c59079720a2c,Namespace:calico-system,Attempt:0,} returns sandbox id \"b0e8ca5a8c5e2ca695812a5ac115b7519f60f0cd5ef775c006f825caaaa1c407\"" Aug 13 00:53:10.212195 containerd[1575]: time="2025-08-13T00:53:10.212157344Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Aug 13 00:53:10.433470 kubelet[2714]: E0813 00:53:10.433428 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:53:11.183510 containerd[1575]: time="2025-08-13T00:53:11.183468912Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:53:11.184396 containerd[1575]: time="2025-08-13T00:53:11.184373310Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Aug 13 00:53:11.185279 containerd[1575]: time="2025-08-13T00:53:11.185174739Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:53:11.186914 containerd[1575]: time="2025-08-13T00:53:11.186359993Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:53:11.186914 containerd[1575]: time="2025-08-13T00:53:11.186819827Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 974.626424ms" Aug 13 00:53:11.186914 containerd[1575]: time="2025-08-13T00:53:11.186840267Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Aug 13 00:53:11.189155 containerd[1575]: time="2025-08-13T00:53:11.189132916Z" level=info msg="CreateContainer within sandbox \"b0e8ca5a8c5e2ca695812a5ac115b7519f60f0cd5ef775c006f825caaaa1c407\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 13 00:53:11.199664 containerd[1575]: time="2025-08-13T00:53:11.199637226Z" level=info msg="Container 8caa0fec8c971b04fb9e060fd50c318ba9103972a7e6d2a392e38341c8c9ad20: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:53:11.201221 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2397650355.mount: Deactivated successfully. 
Aug 13 00:53:11.205438 containerd[1575]: time="2025-08-13T00:53:11.205412648Z" level=info msg="CreateContainer within sandbox \"b0e8ca5a8c5e2ca695812a5ac115b7519f60f0cd5ef775c006f825caaaa1c407\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"8caa0fec8c971b04fb9e060fd50c318ba9103972a7e6d2a392e38341c8c9ad20\"" Aug 13 00:53:11.206192 containerd[1575]: time="2025-08-13T00:53:11.206169058Z" level=info msg="StartContainer for \"8caa0fec8c971b04fb9e060fd50c318ba9103972a7e6d2a392e38341c8c9ad20\"" Aug 13 00:53:11.207283 containerd[1575]: time="2025-08-13T00:53:11.207259863Z" level=info msg="connecting to shim 8caa0fec8c971b04fb9e060fd50c318ba9103972a7e6d2a392e38341c8c9ad20" address="unix:///run/containerd/s/66206492c8eb78f2d0724293b5245c0918e6bb1c3df8aaba6f078864f7e2b86d" protocol=ttrpc version=3 Aug 13 00:53:11.242657 systemd[1]: Started cri-containerd-8caa0fec8c971b04fb9e060fd50c318ba9103972a7e6d2a392e38341c8c9ad20.scope - libcontainer container 8caa0fec8c971b04fb9e060fd50c318ba9103972a7e6d2a392e38341c8c9ad20. Aug 13 00:53:11.283433 containerd[1575]: time="2025-08-13T00:53:11.283203657Z" level=info msg="StartContainer for \"8caa0fec8c971b04fb9e060fd50c318ba9103972a7e6d2a392e38341c8c9ad20\" returns successfully" Aug 13 00:53:11.285654 containerd[1575]: time="2025-08-13T00:53:11.285635544Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 13 00:53:11.437554 kubelet[2714]: E0813 00:53:11.437448 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:53:11.956776 containerd[1575]: time="2025-08-13T00:53:11.956732819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d647ccb87-wkv5s,Uid:2addd270-7ed2-4caf-9455-ca6a63f6fe8b,Namespace:calico-system,Attempt:0,}" Aug 13 00:53:11.997767 systemd-networkd[1474]: cali1af29e950b4: Gained IPv6LL Aug 13 00:53:12.106806 systemd-networkd[1474]: cali468c67d639f: Link UP Aug 13 00:53:12.107839 systemd-networkd[1474]: cali468c67d639f: Gained carrier Aug 13 00:53:12.132859 containerd[1575]: 2025-08-13 00:53:12.009 [INFO][6266] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--199--101-k8s-calico--kube--controllers--6d647ccb87--wkv5s-eth0 calico-kube-controllers-6d647ccb87- calico-system 2addd270-7ed2-4caf-9455-ca6a63f6fe8b 805 0 2025-08-13 00:49:54 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6d647ccb87 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172-234-199-101 calico-kube-controllers-6d647ccb87-wkv5s eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali468c67d639f [] [] }} ContainerID="7dab45db188da53098c624527aa95ec1aac2bd585b987ca105e83bfd4822debd" Namespace="calico-system" Pod="calico-kube-controllers-6d647ccb87-wkv5s" WorkloadEndpoint="172--234--199--101-k8s-calico--kube--controllers--6d647ccb87--wkv5s-" Aug 13 00:53:12.132859 containerd[1575]: 2025-08-13 00:53:12.009 [INFO][6266] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7dab45db188da53098c624527aa95ec1aac2bd585b987ca105e83bfd4822debd" Namespace="calico-system" Pod="calico-kube-controllers-6d647ccb87-wkv5s" 
WorkloadEndpoint="172--234--199--101-k8s-calico--kube--controllers--6d647ccb87--wkv5s-eth0" Aug 13 00:53:12.132859 containerd[1575]: 2025-08-13 00:53:12.047 [INFO][6278] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7dab45db188da53098c624527aa95ec1aac2bd585b987ca105e83bfd4822debd" HandleID="k8s-pod-network.7dab45db188da53098c624527aa95ec1aac2bd585b987ca105e83bfd4822debd" Workload="172--234--199--101-k8s-calico--kube--controllers--6d647ccb87--wkv5s-eth0" Aug 13 00:53:12.132859 containerd[1575]: 2025-08-13 00:53:12.047 [INFO][6278] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7dab45db188da53098c624527aa95ec1aac2bd585b987ca105e83bfd4822debd" HandleID="k8s-pod-network.7dab45db188da53098c624527aa95ec1aac2bd585b987ca105e83bfd4822debd" Workload="172--234--199--101-k8s-calico--kube--controllers--6d647ccb87--wkv5s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf100), Attrs:map[string]string{"namespace":"calico-system", "node":"172-234-199-101", "pod":"calico-kube-controllers-6d647ccb87-wkv5s", "timestamp":"2025-08-13 00:53:12.047345455 +0000 UTC"}, Hostname:"172-234-199-101", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:53:12.132859 containerd[1575]: 2025-08-13 00:53:12.047 [INFO][6278] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:53:12.132859 containerd[1575]: 2025-08-13 00:53:12.048 [INFO][6278] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:53:12.132859 containerd[1575]: 2025-08-13 00:53:12.048 [INFO][6278] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-199-101' Aug 13 00:53:12.132859 containerd[1575]: 2025-08-13 00:53:12.056 [INFO][6278] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7dab45db188da53098c624527aa95ec1aac2bd585b987ca105e83bfd4822debd" host="172-234-199-101" Aug 13 00:53:12.132859 containerd[1575]: 2025-08-13 00:53:12.065 [INFO][6278] ipam/ipam.go 394: Looking up existing affinities for host host="172-234-199-101" Aug 13 00:53:12.132859 containerd[1575]: 2025-08-13 00:53:12.072 [INFO][6278] ipam/ipam.go 511: Trying affinity for 192.168.72.128/26 host="172-234-199-101" Aug 13 00:53:12.132859 containerd[1575]: 2025-08-13 00:53:12.075 [INFO][6278] ipam/ipam.go 158: Attempting to load block cidr=192.168.72.128/26 host="172-234-199-101" Aug 13 00:53:12.132859 containerd[1575]: 2025-08-13 00:53:12.078 [INFO][6278] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.72.128/26 host="172-234-199-101" Aug 13 00:53:12.132859 containerd[1575]: 2025-08-13 00:53:12.078 [INFO][6278] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.72.128/26 handle="k8s-pod-network.7dab45db188da53098c624527aa95ec1aac2bd585b987ca105e83bfd4822debd" host="172-234-199-101" Aug 13 00:53:12.132859 containerd[1575]: 2025-08-13 00:53:12.082 [INFO][6278] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7dab45db188da53098c624527aa95ec1aac2bd585b987ca105e83bfd4822debd Aug 13 00:53:12.132859 containerd[1575]: 2025-08-13 00:53:12.089 [INFO][6278] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.72.128/26 handle="k8s-pod-network.7dab45db188da53098c624527aa95ec1aac2bd585b987ca105e83bfd4822debd" host="172-234-199-101" Aug 13 00:53:12.132859 containerd[1575]: 2025-08-13 00:53:12.094 [INFO][6278] ipam/ipam.go 
1256: Successfully claimed IPs: [192.168.72.131/26] block=192.168.72.128/26 handle="k8s-pod-network.7dab45db188da53098c624527aa95ec1aac2bd585b987ca105e83bfd4822debd" host="172-234-199-101" Aug 13 00:53:12.132859 containerd[1575]: 2025-08-13 00:53:12.094 [INFO][6278] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.72.131/26] handle="k8s-pod-network.7dab45db188da53098c624527aa95ec1aac2bd585b987ca105e83bfd4822debd" host="172-234-199-101" Aug 13 00:53:12.132859 containerd[1575]: 2025-08-13 00:53:12.095 [INFO][6278] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:53:12.132859 containerd[1575]: 2025-08-13 00:53:12.095 [INFO][6278] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.72.131/26] IPv6=[] ContainerID="7dab45db188da53098c624527aa95ec1aac2bd585b987ca105e83bfd4822debd" HandleID="k8s-pod-network.7dab45db188da53098c624527aa95ec1aac2bd585b987ca105e83bfd4822debd" Workload="172--234--199--101-k8s-calico--kube--controllers--6d647ccb87--wkv5s-eth0" Aug 13 00:53:12.134239 containerd[1575]: 2025-08-13 00:53:12.099 [INFO][6266] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7dab45db188da53098c624527aa95ec1aac2bd585b987ca105e83bfd4822debd" Namespace="calico-system" Pod="calico-kube-controllers-6d647ccb87-wkv5s" WorkloadEndpoint="172--234--199--101-k8s-calico--kube--controllers--6d647ccb87--wkv5s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--199--101-k8s-calico--kube--controllers--6d647ccb87--wkv5s-eth0", GenerateName:"calico-kube-controllers-6d647ccb87-", Namespace:"calico-system", SelfLink:"", UID:"2addd270-7ed2-4caf-9455-ca6a63f6fe8b", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 49, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d647ccb87", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-199-101", ContainerID:"", Pod:"calico-kube-controllers-6d647ccb87-wkv5s", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.72.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali468c67d639f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:53:12.134239 containerd[1575]: 2025-08-13 00:53:12.099 [INFO][6266] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.72.131/32] ContainerID="7dab45db188da53098c624527aa95ec1aac2bd585b987ca105e83bfd4822debd" Namespace="calico-system" Pod="calico-kube-controllers-6d647ccb87-wkv5s" WorkloadEndpoint="172--234--199--101-k8s-calico--kube--controllers--6d647ccb87--wkv5s-eth0" Aug 13 00:53:12.134239 containerd[1575]: 2025-08-13 00:53:12.099 [INFO][6266] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali468c67d639f ContainerID="7dab45db188da53098c624527aa95ec1aac2bd585b987ca105e83bfd4822debd" 
Namespace="calico-system" Pod="calico-kube-controllers-6d647ccb87-wkv5s" WorkloadEndpoint="172--234--199--101-k8s-calico--kube--controllers--6d647ccb87--wkv5s-eth0" Aug 13 00:53:12.134239 containerd[1575]: 2025-08-13 00:53:12.109 [INFO][6266] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7dab45db188da53098c624527aa95ec1aac2bd585b987ca105e83bfd4822debd" Namespace="calico-system" Pod="calico-kube-controllers-6d647ccb87-wkv5s" WorkloadEndpoint="172--234--199--101-k8s-calico--kube--controllers--6d647ccb87--wkv5s-eth0" Aug 13 00:53:12.134239 containerd[1575]: 2025-08-13 00:53:12.110 [INFO][6266] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7dab45db188da53098c624527aa95ec1aac2bd585b987ca105e83bfd4822debd" Namespace="calico-system" Pod="calico-kube-controllers-6d647ccb87-wkv5s" WorkloadEndpoint="172--234--199--101-k8s-calico--kube--controllers--6d647ccb87--wkv5s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--199--101-k8s-calico--kube--controllers--6d647ccb87--wkv5s-eth0", GenerateName:"calico-kube-controllers-6d647ccb87-", Namespace:"calico-system", SelfLink:"", UID:"2addd270-7ed2-4caf-9455-ca6a63f6fe8b", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 49, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d647ccb87", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-199-101", ContainerID:"7dab45db188da53098c624527aa95ec1aac2bd585b987ca105e83bfd4822debd", Pod:"calico-kube-controllers-6d647ccb87-wkv5s", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.72.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali468c67d639f", MAC:"1e:ef:57:2a:8a:af", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:53:12.134239 containerd[1575]: 2025-08-13 00:53:12.127 [INFO][6266] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7dab45db188da53098c624527aa95ec1aac2bd585b987ca105e83bfd4822debd" Namespace="calico-system" Pod="calico-kube-controllers-6d647ccb87-wkv5s" WorkloadEndpoint="172--234--199--101-k8s-calico--kube--controllers--6d647ccb87--wkv5s-eth0" Aug 13 00:53:12.170860 containerd[1575]: time="2025-08-13T00:53:12.169893608Z" level=info msg="connecting to shim 7dab45db188da53098c624527aa95ec1aac2bd585b987ca105e83bfd4822debd" address="unix:///run/containerd/s/d36467d867063fee42512e9389cfff75770a5c1d2bdd9cd85694de23f822e544" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:53:12.214764 systemd[1]: Started cri-containerd-7dab45db188da53098c624527aa95ec1aac2bd585b987ca105e83bfd4822debd.scope - libcontainer container 7dab45db188da53098c624527aa95ec1aac2bd585b987ca105e83bfd4822debd. 
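The [INFO][6278] ipam lines above walk through Calico's block-affinity IPAM: acquire the host-wide lock, confirm this node's affinity for 192.168.72.128/26, load the block, claim the next free address (192.168.72.131/26 here), write the block back, and release the lock. The sketch below is a toy model of just the "next free address in an affine block" step; the set of already-used addresses is an assumption chosen to reproduce the result in the log, and none of this is Calico's actual code.

    // Toy model of assigning the next free address from an affine /26 block.
    package main

    import (
        "fmt"
        "net/netip"
    )

    func nextFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
        for a := block.Addr(); block.Contains(a); a = a.Next() {
            if !used[a] {
                return a, true
            }
        }
        return netip.Addr{}, false
    }

    func main() {
        block := netip.MustParsePrefix("192.168.72.128/26")
        // Assumed already-assigned addresses on this node (illustrative only).
        used := map[netip.Addr]bool{
            netip.MustParseAddr("192.168.72.128"): true,
            netip.MustParseAddr("192.168.72.129"): true,
            netip.MustParseAddr("192.168.72.130"): true,
        }
        if ip, ok := nextFree(block, used); ok {
            fmt.Println("assigned", ip) // 192.168.72.131, matching the claim logged above
        }
    }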
Aug 13 00:53:12.291828 containerd[1575]: time="2025-08-13T00:53:12.291765330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d647ccb87-wkv5s,Uid:2addd270-7ed2-4caf-9455-ca6a63f6fe8b,Namespace:calico-system,Attempt:0,} returns sandbox id \"7dab45db188da53098c624527aa95ec1aac2bd585b987ca105e83bfd4822debd\"" Aug 13 00:53:12.440556 kubelet[2714]: E0813 00:53:12.440508 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:53:12.744650 systemd[1]: Started sshd@32-172.234.199.101:22-147.75.109.163:50406.service - OpenSSH per-connection server daemon (147.75.109.163:50406). Aug 13 00:53:12.955670 kubelet[2714]: E0813 00:53:12.955638 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:53:12.956129 containerd[1575]: time="2025-08-13T00:53:12.956084987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hxx58,Uid:635fd78e-3d10-4a30-9894-3818897e1867,Namespace:kube-system,Attempt:0,}" Aug 13 00:53:13.062315 systemd-networkd[1474]: cali3ff8b625400: Link UP Aug 13 00:53:13.064188 systemd-networkd[1474]: cali3ff8b625400: Gained carrier Aug 13 00:53:13.091911 sshd[6340]: Accepted publickey for core from 147.75.109.163 port 50406 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw Aug 13 00:53:13.092851 containerd[1575]: 2025-08-13 00:53:12.989 [INFO][6342] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--199--101-k8s-coredns--7c65d6cfc9--hxx58-eth0 coredns-7c65d6cfc9- kube-system 635fd78e-3d10-4a30-9894-3818897e1867 804 0 2025-08-13 00:49:43 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-234-199-101 coredns-7c65d6cfc9-hxx58 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3ff8b625400 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="817ac78585284dae9f95cb690e5869d99050fa5d50e7ee5e58cb852b6b0a2abe" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hxx58" WorkloadEndpoint="172--234--199--101-k8s-coredns--7c65d6cfc9--hxx58-" Aug 13 00:53:13.092851 containerd[1575]: 2025-08-13 00:53:12.989 [INFO][6342] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="817ac78585284dae9f95cb690e5869d99050fa5d50e7ee5e58cb852b6b0a2abe" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hxx58" WorkloadEndpoint="172--234--199--101-k8s-coredns--7c65d6cfc9--hxx58-eth0" Aug 13 00:53:13.092851 containerd[1575]: 2025-08-13 00:53:13.018 [INFO][6354] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="817ac78585284dae9f95cb690e5869d99050fa5d50e7ee5e58cb852b6b0a2abe" HandleID="k8s-pod-network.817ac78585284dae9f95cb690e5869d99050fa5d50e7ee5e58cb852b6b0a2abe" Workload="172--234--199--101-k8s-coredns--7c65d6cfc9--hxx58-eth0" Aug 13 00:53:13.092851 containerd[1575]: 2025-08-13 00:53:13.018 [INFO][6354] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="817ac78585284dae9f95cb690e5869d99050fa5d50e7ee5e58cb852b6b0a2abe" HandleID="k8s-pod-network.817ac78585284dae9f95cb690e5869d99050fa5d50e7ee5e58cb852b6b0a2abe" 
Workload="172--234--199--101-k8s-coredns--7c65d6cfc9--hxx58-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024eff0), Attrs:map[string]string{"namespace":"kube-system", "node":"172-234-199-101", "pod":"coredns-7c65d6cfc9-hxx58", "timestamp":"2025-08-13 00:53:13.018363169 +0000 UTC"}, Hostname:"172-234-199-101", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:53:13.092851 containerd[1575]: 2025-08-13 00:53:13.018 [INFO][6354] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:53:13.092851 containerd[1575]: 2025-08-13 00:53:13.018 [INFO][6354] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:53:13.092851 containerd[1575]: 2025-08-13 00:53:13.018 [INFO][6354] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-199-101' Aug 13 00:53:13.092851 containerd[1575]: 2025-08-13 00:53:13.025 [INFO][6354] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.817ac78585284dae9f95cb690e5869d99050fa5d50e7ee5e58cb852b6b0a2abe" host="172-234-199-101" Aug 13 00:53:13.092851 containerd[1575]: 2025-08-13 00:53:13.032 [INFO][6354] ipam/ipam.go 394: Looking up existing affinities for host host="172-234-199-101" Aug 13 00:53:13.092851 containerd[1575]: 2025-08-13 00:53:13.036 [INFO][6354] ipam/ipam.go 511: Trying affinity for 192.168.72.128/26 host="172-234-199-101" Aug 13 00:53:13.092851 containerd[1575]: 2025-08-13 00:53:13.038 [INFO][6354] ipam/ipam.go 158: Attempting to load block cidr=192.168.72.128/26 host="172-234-199-101" Aug 13 00:53:13.092851 containerd[1575]: 2025-08-13 00:53:13.040 [INFO][6354] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.72.128/26 host="172-234-199-101" Aug 13 00:53:13.092851 containerd[1575]: 2025-08-13 00:53:13.041 [INFO][6354] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.72.128/26 handle="k8s-pod-network.817ac78585284dae9f95cb690e5869d99050fa5d50e7ee5e58cb852b6b0a2abe" host="172-234-199-101" Aug 13 00:53:13.092851 containerd[1575]: 2025-08-13 00:53:13.042 [INFO][6354] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.817ac78585284dae9f95cb690e5869d99050fa5d50e7ee5e58cb852b6b0a2abe Aug 13 00:53:13.092851 containerd[1575]: 2025-08-13 00:53:13.046 [INFO][6354] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.72.128/26 handle="k8s-pod-network.817ac78585284dae9f95cb690e5869d99050fa5d50e7ee5e58cb852b6b0a2abe" host="172-234-199-101" Aug 13 00:53:13.092851 containerd[1575]: 2025-08-13 00:53:13.053 [INFO][6354] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.72.132/26] block=192.168.72.128/26 handle="k8s-pod-network.817ac78585284dae9f95cb690e5869d99050fa5d50e7ee5e58cb852b6b0a2abe" host="172-234-199-101" Aug 13 00:53:13.092851 containerd[1575]: 2025-08-13 00:53:13.053 [INFO][6354] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.72.132/26] handle="k8s-pod-network.817ac78585284dae9f95cb690e5869d99050fa5d50e7ee5e58cb852b6b0a2abe" host="172-234-199-101" Aug 13 00:53:13.092851 containerd[1575]: 2025-08-13 00:53:13.053 [INFO][6354] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 00:53:13.092851 containerd[1575]: 2025-08-13 00:53:13.053 [INFO][6354] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.72.132/26] IPv6=[] ContainerID="817ac78585284dae9f95cb690e5869d99050fa5d50e7ee5e58cb852b6b0a2abe" HandleID="k8s-pod-network.817ac78585284dae9f95cb690e5869d99050fa5d50e7ee5e58cb852b6b0a2abe" Workload="172--234--199--101-k8s-coredns--7c65d6cfc9--hxx58-eth0" Aug 13 00:53:13.094417 containerd[1575]: 2025-08-13 00:53:13.056 [INFO][6342] cni-plugin/k8s.go 418: Populated endpoint ContainerID="817ac78585284dae9f95cb690e5869d99050fa5d50e7ee5e58cb852b6b0a2abe" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hxx58" WorkloadEndpoint="172--234--199--101-k8s-coredns--7c65d6cfc9--hxx58-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--199--101-k8s-coredns--7c65d6cfc9--hxx58-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"635fd78e-3d10-4a30-9894-3818897e1867", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 49, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-199-101", ContainerID:"", Pod:"coredns-7c65d6cfc9-hxx58", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.72.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3ff8b625400", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:53:13.094417 containerd[1575]: 2025-08-13 00:53:13.056 [INFO][6342] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.72.132/32] ContainerID="817ac78585284dae9f95cb690e5869d99050fa5d50e7ee5e58cb852b6b0a2abe" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hxx58" WorkloadEndpoint="172--234--199--101-k8s-coredns--7c65d6cfc9--hxx58-eth0" Aug 13 00:53:13.094417 containerd[1575]: 2025-08-13 00:53:13.056 [INFO][6342] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3ff8b625400 ContainerID="817ac78585284dae9f95cb690e5869d99050fa5d50e7ee5e58cb852b6b0a2abe" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hxx58" WorkloadEndpoint="172--234--199--101-k8s-coredns--7c65d6cfc9--hxx58-eth0" Aug 13 00:53:13.094417 containerd[1575]: 2025-08-13 00:53:13.063 [INFO][6342] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="817ac78585284dae9f95cb690e5869d99050fa5d50e7ee5e58cb852b6b0a2abe" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hxx58" 
WorkloadEndpoint="172--234--199--101-k8s-coredns--7c65d6cfc9--hxx58-eth0" Aug 13 00:53:13.094417 containerd[1575]: 2025-08-13 00:53:13.066 [INFO][6342] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="817ac78585284dae9f95cb690e5869d99050fa5d50e7ee5e58cb852b6b0a2abe" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hxx58" WorkloadEndpoint="172--234--199--101-k8s-coredns--7c65d6cfc9--hxx58-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--199--101-k8s-coredns--7c65d6cfc9--hxx58-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"635fd78e-3d10-4a30-9894-3818897e1867", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 49, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-199-101", ContainerID:"817ac78585284dae9f95cb690e5869d99050fa5d50e7ee5e58cb852b6b0a2abe", Pod:"coredns-7c65d6cfc9-hxx58", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.72.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3ff8b625400", MAC:"da:1c:92:c9:e8:9c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:53:13.094417 containerd[1575]: 2025-08-13 00:53:13.081 [INFO][6342] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="817ac78585284dae9f95cb690e5869d99050fa5d50e7ee5e58cb852b6b0a2abe" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hxx58" WorkloadEndpoint="172--234--199--101-k8s-coredns--7c65d6cfc9--hxx58-eth0" Aug 13 00:53:13.095862 sshd-session[6340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:53:13.110729 systemd[1]: Started session-29.scope - Session 29 of User core. Aug 13 00:53:13.110778 systemd-logind[1528]: New session 29 of user core. Aug 13 00:53:13.152433 containerd[1575]: time="2025-08-13T00:53:13.152048381Z" level=info msg="connecting to shim 817ac78585284dae9f95cb690e5869d99050fa5d50e7ee5e58cb852b6b0a2abe" address="unix:///run/containerd/s/93a09f5b1431867b46d81e75fd006456c6b26fc250972e9f4835770c844cebce" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:53:13.190698 systemd[1]: Started cri-containerd-817ac78585284dae9f95cb690e5869d99050fa5d50e7ee5e58cb852b6b0a2abe.scope - libcontainer container 817ac78585284dae9f95cb690e5869d99050fa5d50e7ee5e58cb852b6b0a2abe. 
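In the endpoint dumps above the coredns workload ports are printed in hex: Port:0x35 is 53 (dns and dns-tcp) and Port:0x23c1 is 9153 (the coredns metrics port). A one-line check of the conversion:

    package main

    import "fmt"

    func main() {
        fmt.Println(0x35, 0x23c1) // prints "53 9153"
    }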
Aug 13 00:53:13.269392 containerd[1575]: time="2025-08-13T00:53:13.269345306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hxx58,Uid:635fd78e-3d10-4a30-9894-3818897e1867,Namespace:kube-system,Attempt:0,} returns sandbox id \"817ac78585284dae9f95cb690e5869d99050fa5d50e7ee5e58cb852b6b0a2abe\"" Aug 13 00:53:13.271919 kubelet[2714]: E0813 00:53:13.271885 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:53:13.276073 containerd[1575]: time="2025-08-13T00:53:13.276050039Z" level=info msg="CreateContainer within sandbox \"817ac78585284dae9f95cb690e5869d99050fa5d50e7ee5e58cb852b6b0a2abe\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:53:13.298358 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2261922810.mount: Deactivated successfully. Aug 13 00:53:13.300764 containerd[1575]: time="2025-08-13T00:53:13.300710639Z" level=info msg="Container e1c91c63179d88005bb854c72af4d673f70934b16e3fbe79b81cbbbd7d2cd1e6: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:53:13.303451 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2104461822.mount: Deactivated successfully. Aug 13 00:53:13.326586 containerd[1575]: time="2025-08-13T00:53:13.324707057Z" level=info msg="CreateContainer within sandbox \"817ac78585284dae9f95cb690e5869d99050fa5d50e7ee5e58cb852b6b0a2abe\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e1c91c63179d88005bb854c72af4d673f70934b16e3fbe79b81cbbbd7d2cd1e6\"" Aug 13 00:53:13.330541 containerd[1575]: time="2025-08-13T00:53:13.327218104Z" level=info msg="StartContainer for \"e1c91c63179d88005bb854c72af4d673f70934b16e3fbe79b81cbbbd7d2cd1e6\"" Aug 13 00:53:13.330541 containerd[1575]: time="2025-08-13T00:53:13.328093543Z" level=info msg="connecting to shim e1c91c63179d88005bb854c72af4d673f70934b16e3fbe79b81cbbbd7d2cd1e6" address="unix:///run/containerd/s/93a09f5b1431867b46d81e75fd006456c6b26fc250972e9f4835770c844cebce" protocol=ttrpc version=3 Aug 13 00:53:13.370792 systemd[1]: Started cri-containerd-e1c91c63179d88005bb854c72af4d673f70934b16e3fbe79b81cbbbd7d2cd1e6.scope - libcontainer container e1c91c63179d88005bb854c72af4d673f70934b16e3fbe79b81cbbbd7d2cd1e6. Aug 13 00:53:13.404710 systemd-networkd[1474]: cali468c67d639f: Gained IPv6LL Aug 13 00:53:13.441715 containerd[1575]: time="2025-08-13T00:53:13.441676466Z" level=info msg="StartContainer for \"e1c91c63179d88005bb854c72af4d673f70934b16e3fbe79b81cbbbd7d2cd1e6\" returns successfully" Aug 13 00:53:13.447753 kubelet[2714]: E0813 00:53:13.447724 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:53:13.452283 sshd[6371]: Connection closed by 147.75.109.163 port 50406 Aug 13 00:53:13.455366 sshd-session[6340]: pam_unix(sshd:session): session closed for user core Aug 13 00:53:13.463375 systemd[1]: sshd@32-172.234.199.101:22-147.75.109.163:50406.service: Deactivated successfully. Aug 13 00:53:13.467405 systemd[1]: session-29.scope: Deactivated successfully. Aug 13 00:53:13.470737 systemd-logind[1528]: Session 29 logged out. Waiting for processes to exit. Aug 13 00:53:13.474164 systemd-logind[1528]: Removed session 29. 
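The recurring kubelet dns.go:153 warning above means the effective resolv.conf lists more nameservers than the three the resolver will honour, so the extras are dropped and only 172.232.0.16, 172.232.0.21 and 172.232.0.13 are applied. The sketch below reproduces that check; the file path and the fixed limit of three are assumptions based on the conventional glibc resolver limit, not on this node's kubelet configuration.

    // Illustrative check for the "Nameserver limits exceeded" condition.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const maxNameservers = 3 // classic resolver limit that kubelet warns about
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            fmt.Printf("resolv.conf lists %d nameservers; only the first %d will be used: %v\n",
                len(servers), maxNameservers, servers[:maxNameservers])
        }
    }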
Aug 13 00:53:13.489537 kubelet[2714]: I0813 00:53:13.489311 2714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-hxx58" podStartSLOduration=210.489180839 podStartE2EDuration="3m30.489180839s" podCreationTimestamp="2025-08-13 00:49:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:53:13.471113654 +0000 UTC m=+215.632933773" watchObservedRunningTime="2025-08-13 00:53:13.489180839 +0000 UTC m=+215.651000968" Aug 13 00:53:13.568190 containerd[1575]: time="2025-08-13T00:53:13.568071214Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:53:13.570103 containerd[1575]: time="2025-08-13T00:53:13.570082597Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Aug 13 00:53:13.570807 containerd[1575]: time="2025-08-13T00:53:13.570765089Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:53:13.573202 containerd[1575]: time="2025-08-13T00:53:13.573168627Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:53:13.574079 containerd[1575]: time="2025-08-13T00:53:13.574046726Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 2.288319303s" Aug 13 00:53:13.574168 containerd[1575]: time="2025-08-13T00:53:13.574130725Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Aug 13 00:53:13.575771 containerd[1575]: time="2025-08-13T00:53:13.575734244Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 00:53:13.577827 containerd[1575]: time="2025-08-13T00:53:13.577771718Z" level=info msg="CreateContainer within sandbox \"b0e8ca5a8c5e2ca695812a5ac115b7519f60f0cd5ef775c006f825caaaa1c407\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 13 00:53:13.591141 containerd[1575]: time="2025-08-13T00:53:13.591067325Z" level=info msg="Container 6eb9d9c067489a89928e05e656ef1d2abc1fa06cebc82b8a7fcdfe08a39ebe95: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:53:13.599377 containerd[1575]: time="2025-08-13T00:53:13.599341127Z" level=info msg="CreateContainer within sandbox \"b0e8ca5a8c5e2ca695812a5ac115b7519f60f0cd5ef775c006f825caaaa1c407\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"6eb9d9c067489a89928e05e656ef1d2abc1fa06cebc82b8a7fcdfe08a39ebe95\"" Aug 13 00:53:13.599893 containerd[1575]: time="2025-08-13T00:53:13.599863230Z" level=info msg="StartContainer for \"6eb9d9c067489a89928e05e656ef1d2abc1fa06cebc82b8a7fcdfe08a39ebe95\"" Aug 13 00:53:13.601511 containerd[1575]: 
time="2025-08-13T00:53:13.601459230Z" level=info msg="connecting to shim 6eb9d9c067489a89928e05e656ef1d2abc1fa06cebc82b8a7fcdfe08a39ebe95" address="unix:///run/containerd/s/66206492c8eb78f2d0724293b5245c0918e6bb1c3df8aaba6f078864f7e2b86d" protocol=ttrpc version=3 Aug 13 00:53:13.633666 systemd[1]: Started cri-containerd-6eb9d9c067489a89928e05e656ef1d2abc1fa06cebc82b8a7fcdfe08a39ebe95.scope - libcontainer container 6eb9d9c067489a89928e05e656ef1d2abc1fa06cebc82b8a7fcdfe08a39ebe95. Aug 13 00:53:13.679714 containerd[1575]: time="2025-08-13T00:53:13.679662613Z" level=info msg="StartContainer for \"6eb9d9c067489a89928e05e656ef1d2abc1fa06cebc82b8a7fcdfe08a39ebe95\" returns successfully" Aug 13 00:53:13.854507 kubelet[2714]: I0813 00:53:13.854390 2714 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:53:13.854507 kubelet[2714]: I0813 00:53:13.854423 2714 container_gc.go:88] "Attempting to delete unused containers" Aug 13 00:53:13.860936 kubelet[2714]: I0813 00:53:13.860751 2714 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:53:13.887158 kubelet[2714]: I0813 00:53:13.887134 2714 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:53:13.887267 kubelet[2714]: I0813 00:53:13.887242 2714 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-6d647ccb87-wkv5s","calico-system/calico-typha-644589c98-5v7wp","kube-system/coredns-7c65d6cfc9-dnlsw","kube-system/coredns-7c65d6cfc9-hxx58","calico-system/calico-node-x7x94","kube-system/kube-controller-manager-172-234-199-101","kube-system/kube-proxy-vxgfg","kube-system/kube-apiserver-172-234-199-101","kube-system/kube-scheduler-172-234-199-101","calico-system/csi-node-driver-mmxc6"] Aug 13 00:53:13.887331 kubelet[2714]: E0813 00:53:13.887271 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" Aug 13 00:53:13.887331 kubelet[2714]: E0813 00:53:13.887286 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-644589c98-5v7wp" Aug 13 00:53:13.887331 kubelet[2714]: E0813 00:53:13.887296 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-dnlsw" Aug 13 00:53:13.887331 kubelet[2714]: E0813 00:53:13.887304 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-hxx58" Aug 13 00:53:13.887331 kubelet[2714]: E0813 00:53:13.887314 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-x7x94" Aug 13 00:53:13.887331 kubelet[2714]: E0813 00:53:13.887321 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-199-101" Aug 13 00:53:13.887331 kubelet[2714]: E0813 00:53:13.887329 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-vxgfg" Aug 13 00:53:13.887331 kubelet[2714]: E0813 00:53:13.887338 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-199-101" Aug 13 00:53:13.887509 kubelet[2714]: E0813 00:53:13.887346 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-199-101" Aug 13 00:53:13.887509 kubelet[2714]: 
E0813 00:53:13.887355 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-mmxc6" Aug 13 00:53:13.887509 kubelet[2714]: I0813 00:53:13.887363 2714 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 00:53:14.146420 kubelet[2714]: I0813 00:53:14.146391 2714 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Aug 13 00:53:14.146420 kubelet[2714]: I0813 00:53:14.146419 2714 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Aug 13 00:53:14.289253 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1701250581.mount: Deactivated successfully. Aug 13 00:53:14.455609 kubelet[2714]: E0813 00:53:14.454946 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:53:14.497291 kubelet[2714]: I0813 00:53:14.497233 2714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-mmxc6" podStartSLOduration=197.133118617 podStartE2EDuration="3m20.49721877s" podCreationTimestamp="2025-08-13 00:49:54 +0000 UTC" firstStartedPulling="2025-08-13 00:53:10.211065708 +0000 UTC m=+212.372885827" lastFinishedPulling="2025-08-13 00:53:13.575165861 +0000 UTC m=+215.736985980" observedRunningTime="2025-08-13 00:53:14.47766073 +0000 UTC m=+216.639480889" watchObservedRunningTime="2025-08-13 00:53:14.49721877 +0000 UTC m=+216.659038889" Aug 13 00:53:14.876732 systemd-networkd[1474]: cali3ff8b625400: Gained IPv6LL Aug 13 00:53:15.470018 kubelet[2714]: E0813 00:53:15.468818 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:53:15.481925 containerd[1575]: time="2025-08-13T00:53:15.481868209Z" level=error msg="failed to cleanup \"extract-51060315-c8xt sha256:8c887db5e1c1509bbe47d7287572f140b60a8c0adc0202d6183f3ae0c5f0b532\"" error="write /var/lib/containerd/io.containerd.metadata.v1.bolt/meta.db: no space left on device" Aug 13 00:53:15.482456 containerd[1575]: time="2025-08-13T00:53:15.482427512Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" Aug 13 00:53:15.482509 containerd[1575]: time="2025-08-13T00:53:15.482495281Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=37753066" Aug 13 00:53:15.482693 kubelet[2714]: E0813 00:53:15.482661 2714 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.2" Aug 13 00:53:15.482750 
kubelet[2714]: E0813 00:53:15.482704 2714 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.2" Aug 13 00:53:15.482895 kubelet[2714]: E0813 00:53:15.482814 2714 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2rfmx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6d647ccb87-wkv5s_calico-system(2addd270-7ed2-4caf-9455-ca6a63f6fe8b): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" logger="UnhandledError" Aug 13 00:53:15.484202 kubelet[2714]: E0813 00:53:15.484174 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device\"" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" podUID="2addd270-7ed2-4caf-9455-ca6a63f6fe8b" Aug 13 00:53:16.469547 kubelet[2714]: E0813 00:53:16.468854 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:53:16.470610 kubelet[2714]: E0813 00:53:16.470566 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\"\"" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" podUID="2addd270-7ed2-4caf-9455-ca6a63f6fe8b" Aug 13 00:53:16.955074 kubelet[2714]: E0813 00:53:16.955041 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:53:18.525455 systemd[1]: Started sshd@33-172.234.199.101:22-147.75.109.163:55140.service - OpenSSH per-connection server daemon (147.75.109.163:55140). Aug 13 00:53:18.888329 sshd[6516]: Accepted publickey for core from 147.75.109.163 port 55140 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw Aug 13 00:53:18.890389 sshd-session[6516]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:53:18.895349 systemd-logind[1528]: New session 30 of user core. Aug 13 00:53:18.900634 systemd[1]: Started session-30.scope - Session 30 of User core. Aug 13 00:53:19.206693 sshd[6519]: Connection closed by 147.75.109.163 port 55140 Aug 13 00:53:19.208716 sshd-session[6516]: pam_unix(sshd:session): session closed for user core Aug 13 00:53:19.213081 systemd-logind[1528]: Session 30 logged out. Waiting for processes to exit. Aug 13 00:53:19.214023 systemd[1]: sshd@33-172.234.199.101:22-147.75.109.163:55140.service: Deactivated successfully. Aug 13 00:53:19.218138 systemd[1]: session-30.scope: Deactivated successfully. Aug 13 00:53:19.223266 systemd-logind[1528]: Removed session 30. Aug 13 00:53:22.241135 systemd[1]: Started sshd@34-172.234.199.101:22-111.75.243.5:40318.service - OpenSSH per-connection server daemon (111.75.243.5:40318). 
Aug 13 00:53:23.908470 kubelet[2714]: I0813 00:53:23.908406 2714 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:53:23.908470 kubelet[2714]: I0813 00:53:23.908447 2714 container_gc.go:88] "Attempting to delete unused containers" Aug 13 00:53:23.912233 kubelet[2714]: I0813 00:53:23.912190 2714 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:53:23.925410 kubelet[2714]: I0813 00:53:23.925365 2714 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:53:23.925586 kubelet[2714]: I0813 00:53:23.925475 2714 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-6d647ccb87-wkv5s","calico-system/calico-typha-644589c98-5v7wp","kube-system/coredns-7c65d6cfc9-hxx58","kube-system/coredns-7c65d6cfc9-dnlsw","calico-system/calico-node-x7x94","kube-system/kube-controller-manager-172-234-199-101","kube-system/kube-proxy-vxgfg","calico-system/csi-node-driver-mmxc6","kube-system/kube-apiserver-172-234-199-101","kube-system/kube-scheduler-172-234-199-101"] Aug 13 00:53:23.925586 kubelet[2714]: E0813 00:53:23.925502 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" Aug 13 00:53:23.925586 kubelet[2714]: E0813 00:53:23.925539 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-644589c98-5v7wp" Aug 13 00:53:23.925586 kubelet[2714]: E0813 00:53:23.925550 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-hxx58" Aug 13 00:53:23.925586 kubelet[2714]: E0813 00:53:23.925559 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-dnlsw" Aug 13 00:53:23.925586 kubelet[2714]: E0813 00:53:23.925567 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-x7x94" Aug 13 00:53:23.925586 kubelet[2714]: E0813 00:53:23.925575 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-199-101" Aug 13 00:53:23.925586 kubelet[2714]: E0813 00:53:23.925583 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-vxgfg" Aug 13 00:53:23.925841 kubelet[2714]: E0813 00:53:23.925594 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-mmxc6" Aug 13 00:53:23.925841 kubelet[2714]: E0813 00:53:23.925603 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-199-101" Aug 13 00:53:23.925841 kubelet[2714]: E0813 00:53:23.925611 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-199-101" Aug 13 00:53:23.925841 kubelet[2714]: I0813 00:53:23.925619 2714 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 00:53:24.269880 systemd[1]: Started sshd@35-172.234.199.101:22-147.75.109.163:55146.service - OpenSSH per-connection server daemon (147.75.109.163:55146). 
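The eviction_manager entries above show kubelet under ephemeral-storage pressure: it ranks every pod on the node, but each candidate is a critical control-plane or CNI pod, so the pass ends with "unable to evict any pods from the node". The sketch below is a toy model of that skip-critical loop with made-up types; it is not kubelet's implementation.

    package main

    import "fmt"

    type pod struct {
        name     string
        critical bool
    }

    // pickEvictable walks the ranked candidates in order and skips critical pods,
    // mirroring the "cannot evict a critical pod" lines logged above.
    func pickEvictable(ranked []pod) (pod, bool) {
        for _, p := range ranked {
            if p.critical {
                continue
            }
            return p, true
        }
        return pod{}, false
    }

    func main() {
        ranked := []pod{
            {"calico-system/calico-kube-controllers-6d647ccb87-wkv5s", true},
            {"kube-system/kube-scheduler-172-234-199-101", true},
        }
        if _, ok := pickEvictable(ranked); !ok {
            fmt.Println("unable to evict any pods from the node")
        }
    }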
Aug 13 00:53:24.614792 sshd[6543]: Accepted publickey for core from 147.75.109.163 port 55146 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw Aug 13 00:53:24.616718 sshd-session[6543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:53:24.623590 systemd-logind[1528]: New session 31 of user core. Aug 13 00:53:24.626703 systemd[1]: Started session-31.scope - Session 31 of User core. Aug 13 00:53:24.937778 sshd[6545]: Connection closed by 147.75.109.163 port 55146 Aug 13 00:53:24.938704 sshd-session[6543]: pam_unix(sshd:session): session closed for user core Aug 13 00:53:24.942473 systemd-logind[1528]: Session 31 logged out. Waiting for processes to exit. Aug 13 00:53:24.943044 systemd[1]: sshd@35-172.234.199.101:22-147.75.109.163:55146.service: Deactivated successfully. Aug 13 00:53:24.945307 systemd[1]: session-31.scope: Deactivated successfully. Aug 13 00:53:24.946914 systemd-logind[1528]: Removed session 31. Aug 13 00:53:25.386898 systemd[1]: Started sshd@36-172.234.199.101:22-103.189.235.176:49878.service - OpenSSH per-connection server daemon (103.189.235.176:49878). Aug 13 00:53:25.735809 sshd[6532]: Invalid user 111111 from 111.75.243.5 port 40318 Aug 13 00:53:26.244297 containerd[1575]: time="2025-08-13T00:53:26.244258996Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9345d3fc4e163aea4299b925fbb68f1fc68bf5ae7a05616fb050ef225cf5d900\" id:\"4b0c0ecc1c80ce7214c1c7e8fa3e89718433460deeafca1b7189e0e088104c36\" pid:6572 exited_at:{seconds:1755046406 nanos:243549174}" Aug 13 00:53:26.620658 sshd-session[6582]: pam_faillock(sshd:auth): User unknown Aug 13 00:53:26.626379 sshd[6532]: Postponed keyboard-interactive for invalid user 111111 from 111.75.243.5 port 40318 ssh2 [preauth] Aug 13 00:53:26.743581 sshd[6557]: Received disconnect from 103.189.235.176 port 49878:11: Bye Bye [preauth] Aug 13 00:53:26.743581 sshd[6557]: Disconnected from authenticating user root 103.189.235.176 port 49878 [preauth] Aug 13 00:53:26.746270 systemd[1]: sshd@36-172.234.199.101:22-103.189.235.176:49878.service: Deactivated successfully. Aug 13 00:53:27.377860 sshd-session[6582]: pam_unix(sshd:auth): check pass; user unknown Aug 13 00:53:27.377902 sshd-session[6582]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=111.75.243.5 Aug 13 00:53:27.378051 sshd-session[6582]: pam_faillock(sshd:auth): User unknown Aug 13 00:53:29.520480 sshd[6532]: PAM: Permission denied for illegal user 111111 from 111.75.243.5 Aug 13 00:53:29.521012 sshd[6532]: Failed keyboard-interactive/pam for invalid user 111111 from 111.75.243.5 port 40318 ssh2 Aug 13 00:53:29.995712 systemd[1]: Started sshd@37-172.234.199.101:22-147.75.109.163:55046.service - OpenSSH per-connection server daemon (147.75.109.163:55046). Aug 13 00:53:30.149368 sshd[6532]: Connection closed by invalid user 111111 111.75.243.5 port 40318 [preauth] Aug 13 00:53:30.152192 systemd[1]: sshd@34-172.234.199.101:22-111.75.243.5:40318.service: Deactivated successfully. Aug 13 00:53:30.336639 sshd[6586]: Accepted publickey for core from 147.75.109.163 port 55046 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw Aug 13 00:53:30.338450 sshd-session[6586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:53:30.344278 systemd-logind[1528]: New session 32 of user core. Aug 13 00:53:30.351664 systemd[1]: Started session-32.scope - Session 32 of User core. 
Aug 13 00:53:30.635340 sshd[6590]: Connection closed by 147.75.109.163 port 55046 Aug 13 00:53:30.635759 sshd-session[6586]: pam_unix(sshd:session): session closed for user core Aug 13 00:53:30.640183 systemd-logind[1528]: Session 32 logged out. Waiting for processes to exit. Aug 13 00:53:30.640968 systemd[1]: sshd@37-172.234.199.101:22-147.75.109.163:55046.service: Deactivated successfully. Aug 13 00:53:30.643140 systemd[1]: session-32.scope: Deactivated successfully. Aug 13 00:53:30.645225 systemd-logind[1528]: Removed session 32. Aug 13 00:53:31.957319 containerd[1575]: time="2025-08-13T00:53:31.957272723Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 00:53:32.797677 containerd[1575]: time="2025-08-13T00:53:32.797606081Z" level=error msg="failed to cleanup \"extract-639363580-8GQE sha256:8c887db5e1c1509bbe47d7287572f140b60a8c0adc0202d6183f3ae0c5f0b532\"" error="write /var/lib/containerd/io.containerd.metadata.v1.bolt/meta.db: no space left on device" Aug 13 00:53:32.798248 containerd[1575]: time="2025-08-13T00:53:32.798184166Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" Aug 13 00:53:32.798319 containerd[1575]: time="2025-08-13T00:53:32.798266275Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=36704490" Aug 13 00:53:32.798498 kubelet[2714]: E0813 00:53:32.798452 2714 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.2" Aug 13 00:53:32.798498 kubelet[2714]: E0813 00:53:32.798500 2714 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.2" Aug 13 00:53:32.799212 kubelet[2714]: E0813 00:53:32.798631 2714 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2rfmx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6d647ccb87-wkv5s_calico-system(2addd270-7ed2-4caf-9455-ca6a63f6fe8b): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" logger="UnhandledError" Aug 13 00:53:32.800281 kubelet[2714]: E0813 00:53:32.800137 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device\"" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" podUID="2addd270-7ed2-4caf-9455-ca6a63f6fe8b" Aug 13 00:53:33.947911 kubelet[2714]: I0813 00:53:33.947687 2714 eviction_manager.go:369] "Eviction 
manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:53:33.947911 kubelet[2714]: I0813 00:53:33.947909 2714 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:53:33.948349 kubelet[2714]: I0813 00:53:33.948017 2714 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-6d647ccb87-wkv5s","calico-system/calico-typha-644589c98-5v7wp","kube-system/coredns-7c65d6cfc9-hxx58","kube-system/coredns-7c65d6cfc9-dnlsw","calico-system/calico-node-x7x94","kube-system/kube-controller-manager-172-234-199-101","kube-system/kube-proxy-vxgfg","calico-system/csi-node-driver-mmxc6","kube-system/kube-apiserver-172-234-199-101","kube-system/kube-scheduler-172-234-199-101"] Aug 13 00:53:33.948349 kubelet[2714]: E0813 00:53:33.948043 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" Aug 13 00:53:33.948349 kubelet[2714]: E0813 00:53:33.948055 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-644589c98-5v7wp" Aug 13 00:53:33.948349 kubelet[2714]: E0813 00:53:33.948063 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-hxx58" Aug 13 00:53:33.948349 kubelet[2714]: E0813 00:53:33.948071 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-dnlsw" Aug 13 00:53:33.948349 kubelet[2714]: E0813 00:53:33.948079 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-x7x94" Aug 13 00:53:33.948349 kubelet[2714]: E0813 00:53:33.948087 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-199-101" Aug 13 00:53:33.948349 kubelet[2714]: E0813 00:53:33.948094 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-vxgfg" Aug 13 00:53:33.948349 kubelet[2714]: E0813 00:53:33.948104 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-mmxc6" Aug 13 00:53:33.948349 kubelet[2714]: E0813 00:53:33.948111 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-199-101" Aug 13 00:53:33.948349 kubelet[2714]: E0813 00:53:33.948119 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-199-101" Aug 13 00:53:33.948349 kubelet[2714]: I0813 00:53:33.948128 2714 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 00:53:35.701990 systemd[1]: Started sshd@38-172.234.199.101:22-147.75.109.163:55056.service - OpenSSH per-connection server daemon (147.75.109.163:55056). Aug 13 00:53:36.053193 sshd[6604]: Accepted publickey for core from 147.75.109.163 port 55056 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw Aug 13 00:53:36.055339 sshd-session[6604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:53:36.060789 systemd-logind[1528]: New session 33 of user core. Aug 13 00:53:36.068009 systemd[1]: Started session-33.scope - Session 33 of User core. 
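The condition driving everything that follows appears in the PullImage failure at 00:53:32 above: both the content ingest file and containerd's bolt metadata database hit "no space left on device" under /var/lib/containerd, so the roughly 36.7 MB calico layer (bytes read=36704490) can never be committed. A minimal check of that condition on such a node might look like the sketch below; treating /var/lib/containerd as the mount point of the kubelet's image filesystem is an assumption.

    import shutil

    # Assumption: /var/lib/containerd sits on the filesystem the kubelet treats as
    # the image filesystem; on many nodes this is simply the root filesystem.
    path = "/var/lib/containerd"
    usage = shutil.disk_usage(path)

    layer_bytes = 36704490  # "bytes read" reported for the failed calico layer above
    print(f"{path}: total={usage.total} used={usage.used} free={usage.free}")
    if usage.free < layer_bytes:
        print("not enough free space to commit the layer -> expect ENOSPC on pull")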
Aug 13 00:53:36.364786 sshd[6606]: Connection closed by 147.75.109.163 port 55056 Aug 13 00:53:36.365327 sshd-session[6604]: pam_unix(sshd:session): session closed for user core Aug 13 00:53:36.373354 systemd[1]: sshd@38-172.234.199.101:22-147.75.109.163:55056.service: Deactivated successfully. Aug 13 00:53:36.377094 systemd[1]: session-33.scope: Deactivated successfully. Aug 13 00:53:36.379242 systemd-logind[1528]: Session 33 logged out. Waiting for processes to exit. Aug 13 00:53:36.380963 systemd-logind[1528]: Removed session 33. Aug 13 00:53:40.955958 kubelet[2714]: E0813 00:53:40.955870 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:53:41.427401 systemd[1]: Started sshd@39-172.234.199.101:22-147.75.109.163:37656.service - OpenSSH per-connection server daemon (147.75.109.163:37656). Aug 13 00:53:41.773627 sshd[6620]: Accepted publickey for core from 147.75.109.163 port 37656 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw Aug 13 00:53:41.774641 sshd-session[6620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:53:41.778774 systemd-logind[1528]: New session 34 of user core. Aug 13 00:53:41.782645 systemd[1]: Started session-34.scope - Session 34 of User core. Aug 13 00:53:42.082883 sshd[6622]: Connection closed by 147.75.109.163 port 37656 Aug 13 00:53:42.084066 sshd-session[6620]: pam_unix(sshd:session): session closed for user core Aug 13 00:53:42.088989 systemd-logind[1528]: Session 34 logged out. Waiting for processes to exit. Aug 13 00:53:42.089263 systemd[1]: sshd@39-172.234.199.101:22-147.75.109.163:37656.service: Deactivated successfully. Aug 13 00:53:42.091816 systemd[1]: session-34.scope: Deactivated successfully. Aug 13 00:53:42.094255 systemd-logind[1528]: Removed session 34. 
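The eviction pass at 00:53:33 above shows why the kubelet cannot free the disk on its own: it ranks every pod on the node for ephemeral-storage reclaim, but each candidate is rejected with "cannot evict a critical pod", so the pass ends with "unable to evict any pods from the node". The sketch below only mirrors that skip pattern from the log; the is_critical flag is an assumption standing in for the kubelet's actual static-pod and priority checks, and it is not the real kubelet code.

    # Illustrative only: mirrors the "pods ranked for eviction" -> "cannot evict a
    # critical pod" pattern in the kubelet entries above.
    ranked_pods = [
        # (pod, is_critical) -- is_critical is assumed; every pod in the logged
        # ranking is either a static pod or runs at a critical priority class.
        ("calico-system/calico-kube-controllers-6d647ccb87-wkv5s", True),
        ("calico-system/calico-typha-644589c98-5v7wp", True),
        ("kube-system/kube-apiserver-172-234-199-101", True),
        ("kube-system/kube-scheduler-172-234-199-101", True),
    ]

    def try_reclaim(pods):
        for pod, is_critical in pods:
            if is_critical:
                print(f'cannot evict a critical pod: {pod}')
                continue
            print(f'evicting {pod} to reclaim ephemeral-storage')
            return True
        print('unable to evict any pods from the node')
        return False

    try_reclaim(ranked_pods)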
Aug 13 00:53:43.982141 kubelet[2714]: I0813 00:53:43.982100 2714 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:53:43.982141 kubelet[2714]: I0813 00:53:43.982137 2714 container_gc.go:88] "Attempting to delete unused containers" Aug 13 00:53:43.984438 kubelet[2714]: I0813 00:53:43.984410 2714 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:53:44.001231 kubelet[2714]: I0813 00:53:44.001205 2714 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:53:44.001360 kubelet[2714]: I0813 00:53:44.001330 2714 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-6d647ccb87-wkv5s","calico-system/calico-typha-644589c98-5v7wp","kube-system/coredns-7c65d6cfc9-hxx58","kube-system/coredns-7c65d6cfc9-dnlsw","calico-system/calico-node-x7x94","kube-system/kube-controller-manager-172-234-199-101","kube-system/kube-proxy-vxgfg","calico-system/csi-node-driver-mmxc6","kube-system/kube-apiserver-172-234-199-101","kube-system/kube-scheduler-172-234-199-101"] Aug 13 00:53:44.001432 kubelet[2714]: E0813 00:53:44.001366 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" Aug 13 00:53:44.001432 kubelet[2714]: E0813 00:53:44.001379 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-644589c98-5v7wp" Aug 13 00:53:44.001432 kubelet[2714]: E0813 00:53:44.001388 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-hxx58" Aug 13 00:53:44.001432 kubelet[2714]: E0813 00:53:44.001396 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-dnlsw" Aug 13 00:53:44.001432 kubelet[2714]: E0813 00:53:44.001405 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-x7x94" Aug 13 00:53:44.001432 kubelet[2714]: E0813 00:53:44.001412 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-199-101" Aug 13 00:53:44.001432 kubelet[2714]: E0813 00:53:44.001419 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-vxgfg" Aug 13 00:53:44.001432 kubelet[2714]: E0813 00:53:44.001428 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-mmxc6" Aug 13 00:53:44.001432 kubelet[2714]: E0813 00:53:44.001437 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-199-101" Aug 13 00:53:44.001682 kubelet[2714]: E0813 00:53:44.001444 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-199-101" Aug 13 00:53:44.001682 kubelet[2714]: I0813 00:53:44.001453 2714 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 00:53:44.956501 kubelet[2714]: E0813 00:53:44.956449 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\"\"" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" podUID="2addd270-7ed2-4caf-9455-ca6a63f6fe8b" Aug 
13 00:53:47.139784 systemd[1]: Started sshd@40-172.234.199.101:22-147.75.109.163:37666.service - OpenSSH per-connection server daemon (147.75.109.163:37666). Aug 13 00:53:47.471926 sshd[6643]: Accepted publickey for core from 147.75.109.163 port 37666 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw Aug 13 00:53:47.473059 sshd-session[6643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:53:47.477644 systemd-logind[1528]: New session 35 of user core. Aug 13 00:53:47.483644 systemd[1]: Started session-35.scope - Session 35 of User core. Aug 13 00:53:47.772949 sshd[6645]: Connection closed by 147.75.109.163 port 37666 Aug 13 00:53:47.773771 sshd-session[6643]: pam_unix(sshd:session): session closed for user core Aug 13 00:53:47.777082 systemd-logind[1528]: Session 35 logged out. Waiting for processes to exit. Aug 13 00:53:47.779762 systemd[1]: sshd@40-172.234.199.101:22-147.75.109.163:37666.service: Deactivated successfully. Aug 13 00:53:47.781996 systemd[1]: session-35.scope: Deactivated successfully. Aug 13 00:53:47.784290 systemd-logind[1528]: Removed session 35. Aug 13 00:53:49.955924 kubelet[2714]: E0813 00:53:49.955606 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:53:52.839195 systemd[1]: Started sshd@41-172.234.199.101:22-147.75.109.163:41694.service - OpenSSH per-connection server daemon (147.75.109.163:41694). Aug 13 00:53:52.955782 kubelet[2714]: E0813 00:53:52.955751 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:53:53.181582 sshd[6658]: Accepted publickey for core from 147.75.109.163 port 41694 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw Aug 13 00:53:53.183442 sshd-session[6658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:53:53.190181 systemd-logind[1528]: New session 36 of user core. Aug 13 00:53:53.197019 systemd[1]: Started session-36.scope - Session 36 of User core. Aug 13 00:53:53.500379 sshd[6662]: Connection closed by 147.75.109.163 port 41694 Aug 13 00:53:53.501863 sshd-session[6658]: pam_unix(sshd:session): session closed for user core Aug 13 00:53:53.507614 systemd[1]: sshd@41-172.234.199.101:22-147.75.109.163:41694.service: Deactivated successfully. Aug 13 00:53:53.510887 systemd[1]: session-36.scope: Deactivated successfully. Aug 13 00:53:53.513186 systemd-logind[1528]: Session 36 logged out. Waiting for processes to exit. Aug 13 00:53:53.515138 systemd-logind[1528]: Removed session 36. 
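The recurring "Nameserver limits exceeded" warning appears to be the kubelet trimming the node's resolv.conf for pod DNS: only the first three nameservers are applied (172.232.0.16, 172.232.0.21, 172.232.0.13 in the lines above) and any remaining entries are omitted. A rough sketch of that truncation, assuming a limit of three entries as the applied line suggests; the fourth address in the example is hypothetical.

    MAX_NAMESERVERS = 3  # assumed limit, consistent with the three addresses kept above

    def applied_nameservers(resolv_conf_nameservers):
        kept = resolv_conf_nameservers[:MAX_NAMESERVERS]
        dropped = resolv_conf_nameservers[MAX_NAMESERVERS:]
        if dropped:
            print(f"Nameserver limits exceeded, omitting {dropped}")
        return kept

    # 192.0.2.53 is a made-up extra entry used only to trigger the warning path.
    print(applied_nameservers(["172.232.0.16", "172.232.0.21", "172.232.0.13", "192.0.2.53"]))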
Aug 13 00:53:54.024738 kubelet[2714]: I0813 00:53:54.024693 2714 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:53:54.024738 kubelet[2714]: I0813 00:53:54.024732 2714 container_gc.go:88] "Attempting to delete unused containers" Aug 13 00:53:54.027645 kubelet[2714]: I0813 00:53:54.027617 2714 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:53:54.043257 kubelet[2714]: I0813 00:53:54.043240 2714 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:53:54.043663 kubelet[2714]: I0813 00:53:54.043501 2714 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-6d647ccb87-wkv5s","calico-system/calico-typha-644589c98-5v7wp","kube-system/coredns-7c65d6cfc9-dnlsw","kube-system/coredns-7c65d6cfc9-hxx58","calico-system/calico-node-x7x94","kube-system/kube-controller-manager-172-234-199-101","kube-system/kube-proxy-vxgfg","calico-system/csi-node-driver-mmxc6","kube-system/kube-apiserver-172-234-199-101","kube-system/kube-scheduler-172-234-199-101"] Aug 13 00:53:54.043663 kubelet[2714]: E0813 00:53:54.043546 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" Aug 13 00:53:54.043663 kubelet[2714]: E0813 00:53:54.043560 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-644589c98-5v7wp" Aug 13 00:53:54.043663 kubelet[2714]: E0813 00:53:54.043569 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-dnlsw" Aug 13 00:53:54.043663 kubelet[2714]: E0813 00:53:54.043577 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-hxx58" Aug 13 00:53:54.043663 kubelet[2714]: E0813 00:53:54.043586 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-x7x94" Aug 13 00:53:54.043663 kubelet[2714]: E0813 00:53:54.043593 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-199-101" Aug 13 00:53:54.043663 kubelet[2714]: E0813 00:53:54.043601 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-vxgfg" Aug 13 00:53:54.043663 kubelet[2714]: E0813 00:53:54.043611 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-mmxc6" Aug 13 00:53:54.043663 kubelet[2714]: E0813 00:53:54.043619 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-199-101" Aug 13 00:53:54.043663 kubelet[2714]: E0813 00:53:54.043626 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-199-101" Aug 13 00:53:54.043663 kubelet[2714]: I0813 00:53:54.043635 2714 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 00:53:55.956543 kubelet[2714]: E0813 00:53:55.955887 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:53:55.958622 containerd[1575]: time="2025-08-13T00:53:55.958356702Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 00:53:56.242678 containerd[1575]: time="2025-08-13T00:53:56.242564808Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9345d3fc4e163aea4299b925fbb68f1fc68bf5ae7a05616fb050ef225cf5d900\" id:\"b7b2eda0072f6e747931644c587add4b65cac07efda36218d2fe50152fd955c8\" pid:6685 exited_at:{seconds:1755046436 nanos:242145880}" Aug 13 00:53:56.942304 containerd[1575]: time="2025-08-13T00:53:56.942241948Z" level=error msg="failed to cleanup \"extract-762094769-RTCF sha256:8c887db5e1c1509bbe47d7287572f140b60a8c0adc0202d6183f3ae0c5f0b532\"" error="write /var/lib/containerd/io.containerd.metadata.v1.bolt/meta.db: no space left on device" Aug 13 00:53:56.943502 containerd[1575]: time="2025-08-13T00:53:56.942897413Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" Aug 13 00:53:56.943564 containerd[1575]: time="2025-08-13T00:53:56.943541508Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=36704490" Aug 13 00:53:56.943817 kubelet[2714]: E0813 00:53:56.943761 2714 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.2" Aug 13 00:53:56.943984 kubelet[2714]: E0813 00:53:56.943916 2714 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.2" Aug 13 00:53:56.944852 kubelet[2714]: E0813 00:53:56.944719 2714 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2rfmx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6d647ccb87-wkv5s_calico-system(2addd270-7ed2-4caf-9455-ca6a63f6fe8b): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" logger="UnhandledError" Aug 13 00:53:56.946414 kubelet[2714]: E0813 00:53:56.946368 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device\"" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" podUID="2addd270-7ed2-4caf-9455-ca6a63f6fe8b" Aug 13 00:53:58.561330 systemd[1]: Started sshd@42-172.234.199.101:22-147.75.109.163:56620.service - 
OpenSSH per-connection server daemon (147.75.109.163:56620). Aug 13 00:53:58.903316 sshd[6698]: Accepted publickey for core from 147.75.109.163 port 56620 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw Aug 13 00:53:58.904686 sshd-session[6698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:53:58.910649 systemd-logind[1528]: New session 37 of user core. Aug 13 00:53:58.914640 systemd[1]: Started session-37.scope - Session 37 of User core. Aug 13 00:53:59.209893 sshd[6700]: Connection closed by 147.75.109.163 port 56620 Aug 13 00:53:59.208598 sshd-session[6698]: pam_unix(sshd:session): session closed for user core Aug 13 00:53:59.215819 systemd[1]: sshd@42-172.234.199.101:22-147.75.109.163:56620.service: Deactivated successfully. Aug 13 00:53:59.220024 systemd[1]: session-37.scope: Deactivated successfully. Aug 13 00:53:59.221679 systemd-logind[1528]: Session 37 logged out. Waiting for processes to exit. Aug 13 00:53:59.224936 systemd-logind[1528]: Removed session 37. Aug 13 00:54:04.067380 kubelet[2714]: I0813 00:54:04.067325 2714 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:54:04.067380 kubelet[2714]: I0813 00:54:04.067370 2714 container_gc.go:88] "Attempting to delete unused containers" Aug 13 00:54:04.073253 kubelet[2714]: I0813 00:54:04.072859 2714 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:54:04.094670 kubelet[2714]: I0813 00:54:04.094648 2714 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:54:04.094784 kubelet[2714]: I0813 00:54:04.094767 2714 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-6d647ccb87-wkv5s","calico-system/calico-typha-644589c98-5v7wp","kube-system/coredns-7c65d6cfc9-hxx58","kube-system/coredns-7c65d6cfc9-dnlsw","calico-system/calico-node-x7x94","kube-system/kube-controller-manager-172-234-199-101","kube-system/kube-proxy-vxgfg","calico-system/csi-node-driver-mmxc6","kube-system/kube-apiserver-172-234-199-101","kube-system/kube-scheduler-172-234-199-101"] Aug 13 00:54:04.094861 kubelet[2714]: E0813 00:54:04.094791 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" Aug 13 00:54:04.094861 kubelet[2714]: E0813 00:54:04.094805 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-644589c98-5v7wp" Aug 13 00:54:04.094861 kubelet[2714]: E0813 00:54:04.094814 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-hxx58" Aug 13 00:54:04.094861 kubelet[2714]: E0813 00:54:04.094823 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-dnlsw" Aug 13 00:54:04.094861 kubelet[2714]: E0813 00:54:04.094831 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-x7x94" Aug 13 00:54:04.094861 kubelet[2714]: E0813 00:54:04.094839 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-199-101" Aug 13 00:54:04.094861 kubelet[2714]: E0813 00:54:04.094848 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-vxgfg" Aug 13 00:54:04.094861 kubelet[2714]: E0813 00:54:04.094857 2714 
eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-mmxc6" Aug 13 00:54:04.094861 kubelet[2714]: E0813 00:54:04.094866 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-199-101" Aug 13 00:54:04.095066 kubelet[2714]: E0813 00:54:04.094873 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-199-101" Aug 13 00:54:04.095066 kubelet[2714]: I0813 00:54:04.094883 2714 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 00:54:04.272843 systemd[1]: Started sshd@43-172.234.199.101:22-147.75.109.163:56628.service - OpenSSH per-connection server daemon (147.75.109.163:56628). Aug 13 00:54:04.615488 sshd[6712]: Accepted publickey for core from 147.75.109.163 port 56628 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw Aug 13 00:54:04.616958 sshd-session[6712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:54:04.623415 systemd-logind[1528]: New session 38 of user core. Aug 13 00:54:04.632657 systemd[1]: Started session-38.scope - Session 38 of User core. Aug 13 00:54:04.933588 sshd[6714]: Connection closed by 147.75.109.163 port 56628 Aug 13 00:54:04.934357 sshd-session[6712]: pam_unix(sshd:session): session closed for user core Aug 13 00:54:04.939450 systemd[1]: sshd@43-172.234.199.101:22-147.75.109.163:56628.service: Deactivated successfully. Aug 13 00:54:04.942161 systemd[1]: session-38.scope: Deactivated successfully. Aug 13 00:54:04.943022 systemd-logind[1528]: Session 38 logged out. Waiting for processes to exit. Aug 13 00:54:04.945314 systemd-logind[1528]: Removed session 38. Aug 13 00:54:08.955736 kubelet[2714]: E0813 00:54:08.955679 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\"\"" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" podUID="2addd270-7ed2-4caf-9455-ca6a63f6fe8b" Aug 13 00:54:09.993175 systemd[1]: Started sshd@44-172.234.199.101:22-147.75.109.163:36614.service - OpenSSH per-connection server daemon (147.75.109.163:36614). Aug 13 00:54:10.331352 sshd[6727]: Accepted publickey for core from 147.75.109.163 port 36614 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw Aug 13 00:54:10.332603 sshd-session[6727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:54:10.337265 systemd-logind[1528]: New session 39 of user core. Aug 13 00:54:10.342932 systemd[1]: Started session-39.scope - Session 39 of User core. Aug 13 00:54:10.646469 sshd[6730]: Connection closed by 147.75.109.163 port 36614 Aug 13 00:54:10.647130 sshd-session[6727]: pam_unix(sshd:session): session closed for user core Aug 13 00:54:10.651183 systemd[1]: sshd@44-172.234.199.101:22-147.75.109.163:36614.service: Deactivated successfully. Aug 13 00:54:10.653954 systemd[1]: session-39.scope: Deactivated successfully. Aug 13 00:54:10.657365 systemd-logind[1528]: Session 39 logged out. Waiting for processes to exit. Aug 13 00:54:10.659886 systemd-logind[1528]: Removed session 39. 
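Between full pull attempts, the pod at 00:53:44 and again at 00:54:08 above simply cycles through ImagePullBackOff ("Back-off pulling image ..."); the kubelet retries failed pulls on an exponential backoff rather than retrying immediately, which is why fresh "PullImage" attempts only show up every few minutes. The schedule below is only a sketch using commonly cited defaults (10 s initial delay, 300 s cap) that have not been verified against this particular kubelet build.

    # Sketch of an exponential image-pull backoff: initial delay doubling up to a
    # cap. The 10s/300s constants are assumptions, not values read from this node.
    INITIAL_S, CAP_S = 10, 300

    def backoff_schedule(attempts):
        return [min(INITIAL_S * (2 ** n), CAP_S) for n in range(attempts)]

    print(backoff_schedule(7))  # [10, 20, 40, 80, 160, 300, 300]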
Aug 13 00:54:14.115423 kubelet[2714]: I0813 00:54:14.115388 2714 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:54:14.116213 kubelet[2714]: I0813 00:54:14.115804 2714 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:54:14.116213 kubelet[2714]: I0813 00:54:14.116085 2714 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-6d647ccb87-wkv5s","calico-system/calico-typha-644589c98-5v7wp","kube-system/coredns-7c65d6cfc9-hxx58","kube-system/coredns-7c65d6cfc9-dnlsw","calico-system/calico-node-x7x94","kube-system/kube-controller-manager-172-234-199-101","kube-system/kube-proxy-vxgfg","calico-system/csi-node-driver-mmxc6","kube-system/kube-apiserver-172-234-199-101","kube-system/kube-scheduler-172-234-199-101"] Aug 13 00:54:14.116213 kubelet[2714]: E0813 00:54:14.116112 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" Aug 13 00:54:14.116213 kubelet[2714]: E0813 00:54:14.116126 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-644589c98-5v7wp" Aug 13 00:54:14.116213 kubelet[2714]: E0813 00:54:14.116135 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-hxx58" Aug 13 00:54:14.116213 kubelet[2714]: E0813 00:54:14.116144 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-dnlsw" Aug 13 00:54:14.116213 kubelet[2714]: E0813 00:54:14.116151 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-x7x94" Aug 13 00:54:14.116213 kubelet[2714]: E0813 00:54:14.116161 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-199-101" Aug 13 00:54:14.116213 kubelet[2714]: E0813 00:54:14.116169 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-vxgfg" Aug 13 00:54:14.116213 kubelet[2714]: E0813 00:54:14.116179 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-mmxc6" Aug 13 00:54:14.116213 kubelet[2714]: E0813 00:54:14.116187 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-199-101" Aug 13 00:54:14.116213 kubelet[2714]: E0813 00:54:14.116194 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-199-101" Aug 13 00:54:14.116213 kubelet[2714]: I0813 00:54:14.116202 2714 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 00:54:15.710701 systemd[1]: Started sshd@45-172.234.199.101:22-147.75.109.163:36628.service - OpenSSH per-connection server daemon (147.75.109.163:36628). Aug 13 00:54:16.050592 sshd[6744]: Accepted publickey for core from 147.75.109.163 port 36628 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw Aug 13 00:54:16.051112 sshd-session[6744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:54:16.056557 systemd-logind[1528]: New session 40 of user core. Aug 13 00:54:16.061708 systemd[1]: Started session-40.scope - Session 40 of User core. 
Aug 13 00:54:16.360147 sshd[6746]: Connection closed by 147.75.109.163 port 36628 Aug 13 00:54:16.361846 sshd-session[6744]: pam_unix(sshd:session): session closed for user core Aug 13 00:54:16.367673 systemd-logind[1528]: Session 40 logged out. Waiting for processes to exit. Aug 13 00:54:16.367846 systemd[1]: sshd@45-172.234.199.101:22-147.75.109.163:36628.service: Deactivated successfully. Aug 13 00:54:16.372428 systemd[1]: session-40.scope: Deactivated successfully. Aug 13 00:54:16.375649 systemd-logind[1528]: Removed session 40. Aug 13 00:54:21.424594 systemd[1]: Started sshd@46-172.234.199.101:22-147.75.109.163:48032.service - OpenSSH per-connection server daemon (147.75.109.163:48032). Aug 13 00:54:21.767311 sshd[6758]: Accepted publickey for core from 147.75.109.163 port 48032 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw Aug 13 00:54:21.768806 sshd-session[6758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:54:21.774275 systemd-logind[1528]: New session 41 of user core. Aug 13 00:54:21.781694 systemd[1]: Started session-41.scope - Session 41 of User core. Aug 13 00:54:22.075641 sshd[6760]: Connection closed by 147.75.109.163 port 48032 Aug 13 00:54:22.077320 sshd-session[6758]: pam_unix(sshd:session): session closed for user core Aug 13 00:54:22.082224 systemd[1]: sshd@46-172.234.199.101:22-147.75.109.163:48032.service: Deactivated successfully. Aug 13 00:54:22.084945 systemd[1]: session-41.scope: Deactivated successfully. Aug 13 00:54:22.086218 systemd-logind[1528]: Session 41 logged out. Waiting for processes to exit. Aug 13 00:54:22.088799 systemd-logind[1528]: Removed session 41. Aug 13 00:54:22.955864 kubelet[2714]: E0813 00:54:22.955793 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:54:23.957115 kubelet[2714]: E0813 00:54:23.956911 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\"\"" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" podUID="2addd270-7ed2-4caf-9455-ca6a63f6fe8b" Aug 13 00:54:24.137627 kubelet[2714]: I0813 00:54:24.137597 2714 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:54:24.137627 kubelet[2714]: I0813 00:54:24.137635 2714 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:54:24.137788 kubelet[2714]: I0813 00:54:24.137760 2714 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-6d647ccb87-wkv5s","calico-system/calico-typha-644589c98-5v7wp","kube-system/coredns-7c65d6cfc9-hxx58","kube-system/coredns-7c65d6cfc9-dnlsw","calico-system/calico-node-x7x94","kube-system/kube-controller-manager-172-234-199-101","kube-system/kube-proxy-vxgfg","calico-system/csi-node-driver-mmxc6","kube-system/kube-apiserver-172-234-199-101","kube-system/kube-scheduler-172-234-199-101"] Aug 13 00:54:24.137788 kubelet[2714]: E0813 00:54:24.137786 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" Aug 13 00:54:24.137864 kubelet[2714]: E0813 00:54:24.137798 2714 eviction_manager.go:598] "Eviction manager: cannot evict a 
critical pod" pod="calico-system/calico-typha-644589c98-5v7wp" Aug 13 00:54:24.137864 kubelet[2714]: E0813 00:54:24.137806 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-hxx58" Aug 13 00:54:24.137864 kubelet[2714]: E0813 00:54:24.137814 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-dnlsw" Aug 13 00:54:24.137864 kubelet[2714]: E0813 00:54:24.137821 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-x7x94" Aug 13 00:54:24.137864 kubelet[2714]: E0813 00:54:24.137828 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-199-101" Aug 13 00:54:24.137864 kubelet[2714]: E0813 00:54:24.137836 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-vxgfg" Aug 13 00:54:24.137864 kubelet[2714]: E0813 00:54:24.137845 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-mmxc6" Aug 13 00:54:24.137864 kubelet[2714]: E0813 00:54:24.137853 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-199-101" Aug 13 00:54:24.137864 kubelet[2714]: E0813 00:54:24.137860 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-199-101" Aug 13 00:54:24.137864 kubelet[2714]: I0813 00:54:24.137868 2714 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 00:54:26.235067 containerd[1575]: time="2025-08-13T00:54:26.235029064Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9345d3fc4e163aea4299b925fbb68f1fc68bf5ae7a05616fb050ef225cf5d900\" id:\"3fd048c4277322c2d022618e02044c5bf5fa824dbf8fd19c35062d6395190a9e\" pid:6790 exited_at:{seconds:1755046466 nanos:234714726}" Aug 13 00:54:27.139023 systemd[1]: Started sshd@47-172.234.199.101:22-147.75.109.163:48040.service - OpenSSH per-connection server daemon (147.75.109.163:48040). Aug 13 00:54:27.475850 sshd[6803]: Accepted publickey for core from 147.75.109.163 port 48040 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw Aug 13 00:54:27.477976 sshd-session[6803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:54:27.483646 systemd-logind[1528]: New session 42 of user core. Aug 13 00:54:27.490669 systemd[1]: Started session-42.scope - Session 42 of User core. Aug 13 00:54:27.807356 sshd[6805]: Connection closed by 147.75.109.163 port 48040 Aug 13 00:54:27.808061 sshd-session[6803]: pam_unix(sshd:session): session closed for user core Aug 13 00:54:27.814204 systemd[1]: sshd@47-172.234.199.101:22-147.75.109.163:48040.service: Deactivated successfully. Aug 13 00:54:27.818668 systemd[1]: session-42.scope: Deactivated successfully. Aug 13 00:54:27.819856 systemd-logind[1528]: Session 42 logged out. Waiting for processes to exit. Aug 13 00:54:27.821666 systemd-logind[1528]: Removed session 42. 
Aug 13 00:54:31.956305 kubelet[2714]: E0813 00:54:31.956116 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:54:32.870292 systemd[1]: Started sshd@48-172.234.199.101:22-147.75.109.163:59450.service - OpenSSH per-connection server daemon (147.75.109.163:59450). Aug 13 00:54:32.955488 kubelet[2714]: E0813 00:54:32.955455 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:54:33.211394 sshd[6817]: Accepted publickey for core from 147.75.109.163 port 59450 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw Aug 13 00:54:33.212563 sshd-session[6817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:54:33.218702 systemd-logind[1528]: New session 43 of user core. Aug 13 00:54:33.222691 systemd[1]: Started session-43.scope - Session 43 of User core. Aug 13 00:54:33.536329 containerd[1575]: time="2025-08-13T00:54:33.535963181Z" level=warning msg="container event discarded" container=97e50ad64ff17d377df98cd1f00fc1182ded8195625f9de55ea2eb3ac3c5f894 type=CONTAINER_CREATED_EVENT Aug 13 00:54:33.536970 sshd[6821]: Connection closed by 147.75.109.163 port 59450 Aug 13 00:54:33.537718 sshd-session[6817]: pam_unix(sshd:session): session closed for user core Aug 13 00:54:33.541829 systemd[1]: sshd@48-172.234.199.101:22-147.75.109.163:59450.service: Deactivated successfully. Aug 13 00:54:33.544359 systemd[1]: session-43.scope: Deactivated successfully. Aug 13 00:54:33.545491 systemd-logind[1528]: Session 43 logged out. Waiting for processes to exit. Aug 13 00:54:33.546799 containerd[1575]: time="2025-08-13T00:54:33.546760695Z" level=warning msg="container event discarded" container=97e50ad64ff17d377df98cd1f00fc1182ded8195625f9de55ea2eb3ac3c5f894 type=CONTAINER_STARTED_EVENT Aug 13 00:54:33.547602 systemd-logind[1528]: Removed session 43. 
Aug 13 00:54:33.578028 containerd[1575]: time="2025-08-13T00:54:33.577977821Z" level=warning msg="container event discarded" container=3c9c023f8fd8aeeb8dd33321cdccf1785fa53209f5eba4ffaebcbfae9edfa506 type=CONTAINER_CREATED_EVENT Aug 13 00:54:33.578028 containerd[1575]: time="2025-08-13T00:54:33.578012641Z" level=warning msg="container event discarded" container=3c9c023f8fd8aeeb8dd33321cdccf1785fa53209f5eba4ffaebcbfae9edfa506 type=CONTAINER_STARTED_EVENT Aug 13 00:54:33.591367 containerd[1575]: time="2025-08-13T00:54:33.591335275Z" level=warning msg="container event discarded" container=f0476a43f051868712718256864bfb53c1077e3d70921e97a1d38f4377ff153d type=CONTAINER_CREATED_EVENT Aug 13 00:54:33.591367 containerd[1575]: time="2025-08-13T00:54:33.591361404Z" level=warning msg="container event discarded" container=5aaf0e3637cdc766a6698733a3fd1c847fd3cfe748d0b68c43a9cfa1f5885847 type=CONTAINER_CREATED_EVENT Aug 13 00:54:33.591367 containerd[1575]: time="2025-08-13T00:54:33.591369804Z" level=warning msg="container event discarded" container=5aaf0e3637cdc766a6698733a3fd1c847fd3cfe748d0b68c43a9cfa1f5885847 type=CONTAINER_STARTED_EVENT Aug 13 00:54:33.604735 containerd[1575]: time="2025-08-13T00:54:33.604706527Z" level=warning msg="container event discarded" container=d6f409c72b341f5bb366bd4eea96e6cb0538581aad0078802ecea208b8920d7a type=CONTAINER_CREATED_EVENT Aug 13 00:54:33.620792 containerd[1575]: time="2025-08-13T00:54:33.620498010Z" level=warning msg="container event discarded" container=5d74865c157f6b115e89f47b51d95e1547fabcb47a9a13740e87b83286a42588 type=CONTAINER_CREATED_EVENT Aug 13 00:54:33.740847 containerd[1575]: time="2025-08-13T00:54:33.740781667Z" level=warning msg="container event discarded" container=f0476a43f051868712718256864bfb53c1077e3d70921e97a1d38f4377ff153d type=CONTAINER_STARTED_EVENT Aug 13 00:54:33.740847 containerd[1575]: time="2025-08-13T00:54:33.740821367Z" level=warning msg="container event discarded" container=d6f409c72b341f5bb366bd4eea96e6cb0538581aad0078802ecea208b8920d7a type=CONTAINER_STARTED_EVENT Aug 13 00:54:33.767014 containerd[1575]: time="2025-08-13T00:54:33.766967495Z" level=warning msg="container event discarded" container=5d74865c157f6b115e89f47b51d95e1547fabcb47a9a13740e87b83286a42588 type=CONTAINER_STARTED_EVENT Aug 13 00:54:34.166542 kubelet[2714]: I0813 00:54:34.166443 2714 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:54:34.166542 kubelet[2714]: I0813 00:54:34.166472 2714 container_gc.go:88] "Attempting to delete unused containers" Aug 13 00:54:34.170338 kubelet[2714]: I0813 00:54:34.170323 2714 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:54:34.201369 kubelet[2714]: I0813 00:54:34.201092 2714 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:54:34.201369 kubelet[2714]: I0813 00:54:34.201235 2714 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-6d647ccb87-wkv5s","calico-system/calico-typha-644589c98-5v7wp","kube-system/coredns-7c65d6cfc9-hxx58","kube-system/coredns-7c65d6cfc9-dnlsw","calico-system/calico-node-x7x94","kube-system/kube-controller-manager-172-234-199-101","kube-system/kube-proxy-vxgfg","calico-system/csi-node-driver-mmxc6","kube-system/kube-apiserver-172-234-199-101","kube-system/kube-scheduler-172-234-199-101"] Aug 13 00:54:34.201369 kubelet[2714]: E0813 00:54:34.201264 2714 eviction_manager.go:598] "Eviction 
manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" Aug 13 00:54:34.201369 kubelet[2714]: E0813 00:54:34.201277 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-644589c98-5v7wp" Aug 13 00:54:34.201369 kubelet[2714]: E0813 00:54:34.201286 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-hxx58" Aug 13 00:54:34.201369 kubelet[2714]: E0813 00:54:34.201295 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-dnlsw" Aug 13 00:54:34.201369 kubelet[2714]: E0813 00:54:34.201304 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-x7x94" Aug 13 00:54:34.201369 kubelet[2714]: E0813 00:54:34.201312 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-199-101" Aug 13 00:54:34.201369 kubelet[2714]: E0813 00:54:34.201319 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-vxgfg" Aug 13 00:54:34.201369 kubelet[2714]: E0813 00:54:34.201329 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-mmxc6" Aug 13 00:54:34.201369 kubelet[2714]: E0813 00:54:34.201336 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-199-101" Aug 13 00:54:34.201369 kubelet[2714]: E0813 00:54:34.201344 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-199-101" Aug 13 00:54:34.201369 kubelet[2714]: I0813 00:54:34.201353 2714 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 00:54:37.958875 containerd[1575]: time="2025-08-13T00:54:37.958780921Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 00:54:37.960757 kubelet[2714]: I0813 00:54:37.960636 2714 image_gc_manager.go:383] "Disk usage on image filesystem is over the high threshold, trying to free bytes down to the low threshold" usage=100 highThreshold=85 amountToFree=411531673 lowThreshold=80 Aug 13 00:54:37.960757 kubelet[2714]: E0813 00:54:37.960729 2714 kubelet.go:1474] "Image garbage collection failed multiple times in a row" err="Failed to garbage collect required amount of images. Attempted to free 411531673 bytes, but only found 0 bytes eligible to free." Aug 13 00:54:38.595729 systemd[1]: Started sshd@49-172.234.199.101:22-147.75.109.163:55548.service - OpenSSH per-connection server daemon (147.75.109.163:55548). 
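The image GC entries at 00:54:37 above make the dead end explicit: usage on the image filesystem is reported at 100% against a highThreshold of 85, the kubelet tries to free enough bytes to get back under the lowThreshold of 80 (amountToFree=411531673), finds 0 bytes eligible because the remaining images are still in use, and reports that garbage collection has failed multiple times in a row. As far as I can tell the target is roughly "used bytes minus lowThreshold percent of capacity"; the sketch below reproduces the logged number only under an assumed capacity.

    # Only the thresholds (85/80) and amountToFree (411531673) come from the log;
    # the capacity below is an assumption chosen so the arithmetic works out.
    def amount_to_free(used_bytes, capacity_bytes, low_threshold_pct=80):
        target = capacity_bytes * low_threshold_pct // 100
        return max(used_bytes - target, 0)

    capacity = 2_057_658_365   # assumed image-filesystem capacity in bytes
    used = capacity            # usage=100 (percent) in the entry above
    print(amount_to_free(used, capacity))  # 411531673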
Aug 13 00:54:38.660289 containerd[1575]: time="2025-08-13T00:54:38.660231957Z" level=error msg="failed to cleanup \"extract-499953110-EUCv sha256:8c887db5e1c1509bbe47d7287572f140b60a8c0adc0202d6183f3ae0c5f0b532\"" error="write /var/lib/containerd/io.containerd.metadata.v1.bolt/meta.db: no space left on device" Aug 13 00:54:38.661242 containerd[1575]: time="2025-08-13T00:54:38.661184313Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" Aug 13 00:54:38.661426 containerd[1575]: time="2025-08-13T00:54:38.661360853Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=36704490" Aug 13 00:54:38.661829 kubelet[2714]: E0813 00:54:38.661484 2714 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.2" Aug 13 00:54:38.661829 kubelet[2714]: E0813 00:54:38.661544 2714 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.2" Aug 13 00:54:38.661829 kubelet[2714]: E0813 00:54:38.661640 2714 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2rfmx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6d647ccb87-wkv5s_calico-system(2addd270-7ed2-4caf-9455-ca6a63f6fe8b): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" logger="UnhandledError" Aug 13 00:54:38.663106 kubelet[2714]: E0813 00:54:38.663082 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device\"" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" podUID="2addd270-7ed2-4caf-9455-ca6a63f6fe8b" Aug 13 00:54:38.939865 sshd[6836]: Accepted publickey for core from 147.75.109.163 port 55548 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw Aug 13 00:54:38.941481 sshd-session[6836]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:54:38.947761 systemd-logind[1528]: New session 44 of user core. Aug 13 00:54:38.951648 systemd[1]: Started session-44.scope - Session 44 of User core. Aug 13 00:54:39.240549 sshd[6838]: Connection closed by 147.75.109.163 port 55548 Aug 13 00:54:39.241457 sshd-session[6836]: pam_unix(sshd:session): session closed for user core Aug 13 00:54:39.246177 systemd-logind[1528]: Session 44 logged out. Waiting for processes to exit. Aug 13 00:54:39.247089 systemd[1]: sshd@49-172.234.199.101:22-147.75.109.163:55548.service: Deactivated successfully. Aug 13 00:54:39.249159 systemd[1]: session-44.scope: Deactivated successfully. Aug 13 00:54:39.251956 systemd-logind[1528]: Removed session 44. 
Aug 13 00:54:43.555511 containerd[1575]: time="2025-08-13T00:54:43.555422765Z" level=warning msg="container event discarded" container=e7cb45fd3fde5672419dd127e9891e214fde83f11acb50a29e9d31bbd18989e1 type=CONTAINER_CREATED_EVENT Aug 13 00:54:43.555511 containerd[1575]: time="2025-08-13T00:54:43.555480384Z" level=warning msg="container event discarded" container=e7cb45fd3fde5672419dd127e9891e214fde83f11acb50a29e9d31bbd18989e1 type=CONTAINER_STARTED_EVENT Aug 13 00:54:43.573387 containerd[1575]: time="2025-08-13T00:54:43.573336487Z" level=warning msg="container event discarded" container=8f978433ae75ff6bb09ce178bdbedcaff0bde8e9a161a1d1b071f9b6659e786e type=CONTAINER_CREATED_EVENT Aug 13 00:54:43.655682 containerd[1575]: time="2025-08-13T00:54:43.655616497Z" level=warning msg="container event discarded" container=8f978433ae75ff6bb09ce178bdbedcaff0bde8e9a161a1d1b071f9b6659e786e type=CONTAINER_STARTED_EVENT Aug 13 00:54:43.702921 containerd[1575]: time="2025-08-13T00:54:43.702882239Z" level=warning msg="container event discarded" container=66d7027e0c86c0dbacf4cf20a21bfeb1ddf6804c63321f8aa58ae81bc308e8f3 type=CONTAINER_CREATED_EVENT Aug 13 00:54:43.702921 containerd[1575]: time="2025-08-13T00:54:43.702915129Z" level=warning msg="container event discarded" container=66d7027e0c86c0dbacf4cf20a21bfeb1ddf6804c63321f8aa58ae81bc308e8f3 type=CONTAINER_STARTED_EVENT Aug 13 00:54:43.987808 systemd[1]: Started sshd@50-172.234.199.101:22-103.189.235.176:34160.service - OpenSSH per-connection server daemon (103.189.235.176:34160). Aug 13 00:54:44.226879 kubelet[2714]: I0813 00:54:44.226846 2714 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:54:44.226879 kubelet[2714]: I0813 00:54:44.226882 2714 container_gc.go:88] "Attempting to delete unused containers" Aug 13 00:54:44.228403 kubelet[2714]: I0813 00:54:44.228375 2714 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:54:44.242548 kubelet[2714]: I0813 00:54:44.241795 2714 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:54:44.242548 kubelet[2714]: I0813 00:54:44.242389 2714 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-6d647ccb87-wkv5s","calico-system/calico-typha-644589c98-5v7wp","kube-system/coredns-7c65d6cfc9-hxx58","kube-system/coredns-7c65d6cfc9-dnlsw","calico-system/calico-node-x7x94","kube-system/kube-controller-manager-172-234-199-101","kube-system/kube-proxy-vxgfg","calico-system/csi-node-driver-mmxc6","kube-system/kube-apiserver-172-234-199-101","kube-system/kube-scheduler-172-234-199-101"] Aug 13 00:54:44.242548 kubelet[2714]: E0813 00:54:44.242417 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" Aug 13 00:54:44.242548 kubelet[2714]: E0813 00:54:44.242431 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-644589c98-5v7wp" Aug 13 00:54:44.242548 kubelet[2714]: E0813 00:54:44.242439 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-hxx58" Aug 13 00:54:44.242548 kubelet[2714]: E0813 00:54:44.242447 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-dnlsw" Aug 13 00:54:44.242548 kubelet[2714]: E0813 00:54:44.242455 2714 eviction_manager.go:598] "Eviction manager: 
cannot evict a critical pod" pod="calico-system/calico-node-x7x94" Aug 13 00:54:44.242548 kubelet[2714]: E0813 00:54:44.242463 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-199-101" Aug 13 00:54:44.242548 kubelet[2714]: E0813 00:54:44.242470 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-vxgfg" Aug 13 00:54:44.242548 kubelet[2714]: E0813 00:54:44.242481 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-mmxc6" Aug 13 00:54:44.242548 kubelet[2714]: E0813 00:54:44.242488 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-199-101" Aug 13 00:54:44.242548 kubelet[2714]: E0813 00:54:44.242495 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-199-101" Aug 13 00:54:44.242548 kubelet[2714]: I0813 00:54:44.242504 2714 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 00:54:44.299423 systemd[1]: Started sshd@51-172.234.199.101:22-147.75.109.163:55564.service - OpenSSH per-connection server daemon (147.75.109.163:55564). Aug 13 00:54:44.637149 sshd[6869]: Accepted publickey for core from 147.75.109.163 port 55564 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw Aug 13 00:54:44.638772 sshd-session[6869]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:54:44.643702 systemd-logind[1528]: New session 45 of user core. Aug 13 00:54:44.651718 systemd[1]: Started session-45.scope - Session 45 of User core. Aug 13 00:54:44.933631 sshd[6871]: Connection closed by 147.75.109.163 port 55564 Aug 13 00:54:44.934749 sshd-session[6869]: pam_unix(sshd:session): session closed for user core Aug 13 00:54:44.939731 systemd[1]: sshd@51-172.234.199.101:22-147.75.109.163:55564.service: Deactivated successfully. Aug 13 00:54:44.942845 systemd[1]: session-45.scope: Deactivated successfully. Aug 13 00:54:44.943694 systemd-logind[1528]: Session 45 logged out. Waiting for processes to exit. Aug 13 00:54:44.945445 systemd-logind[1528]: Removed session 45. Aug 13 00:54:45.298860 containerd[1575]: time="2025-08-13T00:54:45.298712533Z" level=warning msg="container event discarded" container=c8d79fcf0ea792692eac0766f30fc3de884b10920f99e0b37ac0d8a73eb45c4e type=CONTAINER_CREATED_EVENT Aug 13 00:54:45.362038 containerd[1575]: time="2025-08-13T00:54:45.361985441Z" level=warning msg="container event discarded" container=c8d79fcf0ea792692eac0766f30fc3de884b10920f99e0b37ac0d8a73eb45c4e type=CONTAINER_STARTED_EVENT Aug 13 00:54:45.590662 sshd[6866]: Received disconnect from 103.189.235.176 port 34160:11: Bye Bye [preauth] Aug 13 00:54:45.590662 sshd[6866]: Disconnected from authenticating user root 103.189.235.176 port 34160 [preauth] Aug 13 00:54:45.593493 systemd[1]: sshd@50-172.234.199.101:22-103.189.235.176:34160.service: Deactivated successfully. Aug 13 00:54:49.957560 kubelet[2714]: E0813 00:54:49.955757 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 00:54:50.000304 systemd[1]: Started sshd@52-172.234.199.101:22-147.75.109.163:49938.service - OpenSSH per-connection server daemon (147.75.109.163:49938). 
Aug 13 00:54:50.346986 sshd[6885]: Accepted publickey for core from 147.75.109.163 port 49938 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:54:50.348268 sshd-session[6885]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:54:50.353336 systemd-logind[1528]: New session 46 of user core.
Aug 13 00:54:50.360739 systemd[1]: Started session-46.scope - Session 46 of User core.
Aug 13 00:54:50.650207 sshd[6887]: Connection closed by 147.75.109.163 port 49938
Aug 13 00:54:50.650801 sshd-session[6885]: pam_unix(sshd:session): session closed for user core
Aug 13 00:54:50.655212 systemd[1]: sshd@52-172.234.199.101:22-147.75.109.163:49938.service: Deactivated successfully.
Aug 13 00:54:50.657698 systemd[1]: session-46.scope: Deactivated successfully.
Aug 13 00:54:50.659254 systemd-logind[1528]: Session 46 logged out. Waiting for processes to exit.
Aug 13 00:54:50.661777 systemd-logind[1528]: Removed session 46.
Aug 13 00:54:50.956350 kubelet[2714]: E0813 00:54:50.956159 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\"\"" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" podUID="2addd270-7ed2-4caf-9455-ca6a63f6fe8b"
Aug 13 00:54:51.955504 kubelet[2714]: E0813 00:54:51.955180 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Aug 13 00:54:54.042538 containerd[1575]: time="2025-08-13T00:54:54.042463425Z" level=warning msg="container event discarded" container=f4dc011f92297f78e36f94b7dbb7843b8ccbeba56b97dae72518dd4bf406ab01 type=CONTAINER_CREATED_EVENT
Aug 13 00:54:54.042538 containerd[1575]: time="2025-08-13T00:54:54.042500115Z" level=warning msg="container event discarded" container=f4dc011f92297f78e36f94b7dbb7843b8ccbeba56b97dae72518dd4bf406ab01 type=CONTAINER_STARTED_EVENT
Aug 13 00:54:54.261058 kubelet[2714]: I0813 00:54:54.261027 2714 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 00:54:54.261058 kubelet[2714]: I0813 00:54:54.261060 2714 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 00:54:54.262623 kubelet[2714]: I0813 00:54:54.262593 2714 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 00:54:54.280600 kubelet[2714]: I0813 00:54:54.280510 2714 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 00:54:54.280858 kubelet[2714]: I0813 00:54:54.280833 2714 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-6d647ccb87-wkv5s","calico-system/calico-typha-644589c98-5v7wp","kube-system/coredns-7c65d6cfc9-dnlsw","kube-system/coredns-7c65d6cfc9-hxx58","calico-system/calico-node-x7x94","kube-system/kube-controller-manager-172-234-199-101","kube-system/kube-proxy-vxgfg","calico-system/csi-node-driver-mmxc6","kube-system/kube-apiserver-172-234-199-101","kube-system/kube-scheduler-172-234-199-101"]
Aug 13 00:54:54.280935 kubelet[2714]: E0813 00:54:54.280866 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s"
Aug 13 00:54:54.280935 kubelet[2714]: E0813 00:54:54.280879 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-644589c98-5v7wp"
Aug 13 00:54:54.280935 kubelet[2714]: E0813 00:54:54.280888 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-dnlsw"
Aug 13 00:54:54.280935 kubelet[2714]: E0813 00:54:54.280896 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-hxx58"
Aug 13 00:54:54.280935 kubelet[2714]: E0813 00:54:54.280905 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-x7x94"
Aug 13 00:54:54.280935 kubelet[2714]: E0813 00:54:54.280913 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-199-101"
Aug 13 00:54:54.280935 kubelet[2714]: E0813 00:54:54.280921 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-vxgfg"
Aug 13 00:54:54.280935 kubelet[2714]: E0813 00:54:54.280931 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-mmxc6"
Aug 13 00:54:54.280935 kubelet[2714]: E0813 00:54:54.280939 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-199-101"
Aug 13 00:54:54.281150 kubelet[2714]: E0813 00:54:54.280947 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-199-101"
Aug 13 00:54:54.281150 kubelet[2714]: I0813 00:54:54.280956 2714 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 00:54:54.305979 containerd[1575]: time="2025-08-13T00:54:54.305893057Z" level=warning msg="container event discarded" container=1c8288186968964e8781e7fefb015ce476b20e510bc2c980e05bbd8a3e06754e type=CONTAINER_CREATED_EVENT
Aug 13 00:54:54.305979 containerd[1575]: time="2025-08-13T00:54:54.305924497Z" level=warning msg="container event discarded" container=1c8288186968964e8781e7fefb015ce476b20e510bc2c980e05bbd8a3e06754e type=CONTAINER_STARTED_EVENT
Aug 13 00:54:55.610659 containerd[1575]: time="2025-08-13T00:54:55.610595480Z" level=warning msg="container event discarded" container=64f5ed96cd6435d0387a9156d45c9fa77f4173f26fd1edad24a57ea2cc5a8028 type=CONTAINER_CREATED_EVENT
Aug 13 00:54:55.704906 containerd[1575]: time="2025-08-13T00:54:55.704842963Z" level=warning msg="container event discarded" container=64f5ed96cd6435d0387a9156d45c9fa77f4173f26fd1edad24a57ea2cc5a8028 type=CONTAINER_STARTED_EVENT
Aug 13 00:54:55.710842 systemd[1]: Started sshd@53-172.234.199.101:22-147.75.109.163:49942.service - OpenSSH per-connection server daemon (147.75.109.163:49942).
Aug 13 00:54:56.043879 sshd[6900]: Accepted publickey for core from 147.75.109.163 port 49942 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:54:56.045462 sshd-session[6900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:54:56.051460 systemd-logind[1528]: New session 47 of user core.
Aug 13 00:54:56.059856 systemd[1]: Started session-47.scope - Session 47 of User core.
Aug 13 00:54:56.267346 containerd[1575]: time="2025-08-13T00:54:56.267307011Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9345d3fc4e163aea4299b925fbb68f1fc68bf5ae7a05616fb050ef225cf5d900\" id:\"861fe38b6956e28f83a45f9a873d40ce40726c08b0fbdc1f138f8a7d3a746af8\" pid:6915 exited_at:{seconds:1755046496 nanos:266867583}"
Aug 13 00:54:56.348173 sshd[6902]: Connection closed by 147.75.109.163 port 49942
Aug 13 00:54:56.348948 sshd-session[6900]: pam_unix(sshd:session): session closed for user core
Aug 13 00:54:56.353005 systemd-logind[1528]: Session 47 logged out. Waiting for processes to exit.
Aug 13 00:54:56.353797 systemd[1]: sshd@53-172.234.199.101:22-147.75.109.163:49942.service: Deactivated successfully.
Aug 13 00:54:56.355859 systemd[1]: session-47.scope: Deactivated successfully.
Aug 13 00:54:56.357899 systemd-logind[1528]: Removed session 47.
Aug 13 00:54:56.955777 kubelet[2714]: E0813 00:54:56.955744 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Aug 13 00:54:57.358637 containerd[1575]: time="2025-08-13T00:54:57.358479772Z" level=warning msg="container event discarded" container=81db158723e1439f84f78b06c03b5d037002360d598be04ba41d8113a14dbc92 type=CONTAINER_CREATED_EVENT
Aug 13 00:54:57.434421 containerd[1575]: time="2025-08-13T00:54:57.434375411Z" level=warning msg="container event discarded" container=81db158723e1439f84f78b06c03b5d037002360d598be04ba41d8113a14dbc92 type=CONTAINER_STARTED_EVENT
Aug 13 00:54:57.545827 containerd[1575]: time="2025-08-13T00:54:57.545783917Z" level=warning msg="container event discarded" container=81db158723e1439f84f78b06c03b5d037002360d598be04ba41d8113a14dbc92 type=CONTAINER_STOPPED_EVENT
Aug 13 00:55:01.124474 containerd[1575]: time="2025-08-13T00:55:01.124209169Z" level=warning msg="container event discarded" container=0b2931c49111f5dedab916809e17593978c3ad8a783c833a248fcb8991e634f6 type=CONTAINER_CREATED_EVENT
Aug 13 00:55:01.191718 containerd[1575]: time="2025-08-13T00:55:01.191665855Z" level=warning msg="container event discarded" container=0b2931c49111f5dedab916809e17593978c3ad8a783c833a248fcb8991e634f6 type=CONTAINER_STARTED_EVENT
Aug 13 00:55:01.414178 systemd[1]: Started sshd@54-172.234.199.101:22-147.75.109.163:59054.service - OpenSSH per-connection server daemon (147.75.109.163:59054).
Aug 13 00:55:01.726364 containerd[1575]: time="2025-08-13T00:55:01.726239811Z" level=warning msg="container event discarded" container=0b2931c49111f5dedab916809e17593978c3ad8a783c833a248fcb8991e634f6 type=CONTAINER_STOPPED_EVENT
Aug 13 00:55:01.763718 sshd[6936]: Accepted publickey for core from 147.75.109.163 port 59054 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:55:01.764863 sshd-session[6936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:55:01.771501 systemd-logind[1528]: New session 48 of user core.
Aug 13 00:55:01.776642 systemd[1]: Started session-48.scope - Session 48 of User core.
Aug 13 00:55:02.077630 sshd[6938]: Connection closed by 147.75.109.163 port 59054
Aug 13 00:55:02.078222 sshd-session[6936]: pam_unix(sshd:session): session closed for user core
Aug 13 00:55:02.082921 systemd[1]: sshd@54-172.234.199.101:22-147.75.109.163:59054.service: Deactivated successfully.
Aug 13 00:55:02.086915 systemd[1]: session-48.scope: Deactivated successfully.
Aug 13 00:55:02.090901 systemd-logind[1528]: Session 48 logged out. Waiting for processes to exit.
Aug 13 00:55:02.094841 systemd-logind[1528]: Removed session 48.
Aug 13 00:55:02.955760 kubelet[2714]: E0813 00:55:02.955712 2714 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Aug 13 00:55:03.957032 kubelet[2714]: E0813 00:55:03.956542 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\"\"" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" podUID="2addd270-7ed2-4caf-9455-ca6a63f6fe8b"
Aug 13 00:55:04.302533 kubelet[2714]: I0813 00:55:04.302153 2714 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 00:55:04.302533 kubelet[2714]: I0813 00:55:04.302430 2714 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 00:55:04.304629 kubelet[2714]: I0813 00:55:04.304420 2714 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 00:55:04.327259 kubelet[2714]: I0813 00:55:04.327238 2714 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 00:55:04.327546 kubelet[2714]: I0813 00:55:04.327533 2714 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-6d647ccb87-wkv5s","calico-system/calico-typha-644589c98-5v7wp","kube-system/coredns-7c65d6cfc9-hxx58","kube-system/coredns-7c65d6cfc9-dnlsw","calico-system/calico-node-x7x94","kube-system/kube-controller-manager-172-234-199-101","kube-system/kube-proxy-vxgfg","calico-system/csi-node-driver-mmxc6","kube-system/kube-apiserver-172-234-199-101","kube-system/kube-scheduler-172-234-199-101"]
Aug 13 00:55:04.327637 kubelet[2714]: E0813 00:55:04.327562 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s"
Aug 13 00:55:04.327637 kubelet[2714]: E0813 00:55:04.327575 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-644589c98-5v7wp"
Aug 13 00:55:04.327637 kubelet[2714]: E0813 00:55:04.327584 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-hxx58"
Aug 13 00:55:04.327637 kubelet[2714]: E0813 00:55:04.327592 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-dnlsw"
Aug 13 00:55:04.327637 kubelet[2714]: E0813 00:55:04.327601 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-x7x94"
Aug 13 00:55:04.327637 kubelet[2714]: E0813 00:55:04.327609 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-199-101"
Aug 13 00:55:04.327637 kubelet[2714]: E0813 00:55:04.327619 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-vxgfg"
Aug 13 00:55:04.327637 kubelet[2714]: E0813 00:55:04.327629 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-mmxc6"
Aug 13 00:55:04.327637 kubelet[2714]: E0813 00:55:04.327636 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-199-101"
Aug 13 00:55:04.327637 kubelet[2714]: E0813 00:55:04.327643 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-199-101"
Aug 13 00:55:04.327845 kubelet[2714]: I0813 00:55:04.327652 2714 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 00:55:07.131969 systemd[1]: Started sshd@55-172.234.199.101:22-147.75.109.163:59058.service - OpenSSH per-connection server daemon (147.75.109.163:59058).
Aug 13 00:55:07.470550 sshd[6950]: Accepted publickey for core from 147.75.109.163 port 59058 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:55:07.471903 sshd-session[6950]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:55:07.478378 systemd-logind[1528]: New session 49 of user core.
Aug 13 00:55:07.484658 systemd[1]: Started session-49.scope - Session 49 of User core.
Aug 13 00:55:07.786975 sshd[6952]: Connection closed by 147.75.109.163 port 59058
Aug 13 00:55:07.787471 sshd-session[6950]: pam_unix(sshd:session): session closed for user core
Aug 13 00:55:07.794849 systemd[1]: sshd@55-172.234.199.101:22-147.75.109.163:59058.service: Deactivated successfully.
Aug 13 00:55:07.797157 systemd[1]: session-49.scope: Deactivated successfully.
Aug 13 00:55:07.799243 systemd-logind[1528]: Session 49 logged out. Waiting for processes to exit.
Aug 13 00:55:07.800904 systemd-logind[1528]: Removed session 49.
Aug 13 00:55:12.852502 systemd[1]: Started sshd@56-172.234.199.101:22-147.75.109.163:44322.service - OpenSSH per-connection server daemon (147.75.109.163:44322).
Aug 13 00:55:13.200780 sshd[6964]: Accepted publickey for core from 147.75.109.163 port 44322 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:55:13.202612 sshd-session[6964]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:55:13.206893 systemd-logind[1528]: New session 50 of user core.
Aug 13 00:55:13.210721 systemd[1]: Started session-50.scope - Session 50 of User core.
Aug 13 00:55:13.531201 sshd[6966]: Connection closed by 147.75.109.163 port 44322
Aug 13 00:55:13.530977 sshd-session[6964]: pam_unix(sshd:session): session closed for user core
Aug 13 00:55:13.535820 systemd[1]: sshd@56-172.234.199.101:22-147.75.109.163:44322.service: Deactivated successfully.
Aug 13 00:55:13.537927 systemd[1]: session-50.scope: Deactivated successfully.
Aug 13 00:55:13.539019 systemd-logind[1528]: Session 50 logged out. Waiting for processes to exit.
Aug 13 00:55:13.541479 systemd-logind[1528]: Removed session 50.
Aug 13 00:55:14.350220 kubelet[2714]: I0813 00:55:14.350186 2714 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 00:55:14.350220 kubelet[2714]: I0813 00:55:14.350227 2714 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 00:55:14.350878 kubelet[2714]: I0813 00:55:14.350342 2714 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-6d647ccb87-wkv5s","calico-system/calico-typha-644589c98-5v7wp","kube-system/coredns-7c65d6cfc9-hxx58","kube-system/coredns-7c65d6cfc9-dnlsw","calico-system/calico-node-x7x94","kube-system/kube-controller-manager-172-234-199-101","kube-system/kube-proxy-vxgfg","calico-system/csi-node-driver-mmxc6","kube-system/kube-apiserver-172-234-199-101","kube-system/kube-scheduler-172-234-199-101"]
Aug 13 00:55:14.350878 kubelet[2714]: E0813 00:55:14.350370 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s"
Aug 13 00:55:14.350878 kubelet[2714]: E0813 00:55:14.350383 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-644589c98-5v7wp"
Aug 13 00:55:14.350878 kubelet[2714]: E0813 00:55:14.350392 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-hxx58"
Aug 13 00:55:14.350878 kubelet[2714]: E0813 00:55:14.350400 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-dnlsw"
Aug 13 00:55:14.350878 kubelet[2714]: E0813 00:55:14.350407 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-x7x94"
Aug 13 00:55:14.350878 kubelet[2714]: E0813 00:55:14.350414 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-199-101"
Aug 13 00:55:14.350878 kubelet[2714]: E0813 00:55:14.350423 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-vxgfg"
Aug 13 00:55:14.350878 kubelet[2714]: E0813 00:55:14.350434 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-mmxc6"
Aug 13 00:55:14.350878 kubelet[2714]: E0813 00:55:14.350636 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-199-101"
Aug 13 00:55:14.350878 kubelet[2714]: E0813 00:55:14.350653 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-199-101"
Aug 13 00:55:14.350878 kubelet[2714]: I0813 00:55:14.350668 2714 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 00:55:17.958639 kubelet[2714]: E0813 00:55:17.958577 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\"\"" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" podUID="2addd270-7ed2-4caf-9455-ca6a63f6fe8b"
Aug 13 00:55:18.596591 systemd[1]: Started sshd@57-172.234.199.101:22-147.75.109.163:34156.service - OpenSSH per-connection server daemon (147.75.109.163:34156).
Aug 13 00:55:18.949431 sshd[6980]: Accepted publickey for core from 147.75.109.163 port 34156 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:55:18.951928 sshd-session[6980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:55:18.960506 systemd-logind[1528]: New session 51 of user core.
Aug 13 00:55:18.967698 systemd[1]: Started session-51.scope - Session 51 of User core.
Aug 13 00:55:19.321697 sshd[6982]: Connection closed by 147.75.109.163 port 34156
Aug 13 00:55:19.322752 sshd-session[6980]: pam_unix(sshd:session): session closed for user core
Aug 13 00:55:19.328297 systemd-logind[1528]: Session 51 logged out. Waiting for processes to exit.
Aug 13 00:55:19.332470 systemd[1]: sshd@57-172.234.199.101:22-147.75.109.163:34156.service: Deactivated successfully.
Aug 13 00:55:19.334389 systemd[1]: session-51.scope: Deactivated successfully.
Aug 13 00:55:19.345334 systemd-logind[1528]: Removed session 51.
Aug 13 00:55:24.376713 kubelet[2714]: I0813 00:55:24.376679 2714 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 00:55:24.376713 kubelet[2714]: I0813 00:55:24.376720 2714 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 00:55:24.378506 kubelet[2714]: I0813 00:55:24.378486 2714 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 00:55:24.386051 systemd[1]: Started sshd@58-172.234.199.101:22-147.75.109.163:34160.service - OpenSSH per-connection server daemon (147.75.109.163:34160).
Aug 13 00:55:24.400901 kubelet[2714]: I0813 00:55:24.400865 2714 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 00:55:24.401020 kubelet[2714]: I0813 00:55:24.400998 2714 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-6d647ccb87-wkv5s","calico-system/calico-typha-644589c98-5v7wp","kube-system/coredns-7c65d6cfc9-hxx58","kube-system/coredns-7c65d6cfc9-dnlsw","calico-system/calico-node-x7x94","kube-system/kube-controller-manager-172-234-199-101","kube-system/kube-proxy-vxgfg","calico-system/csi-node-driver-mmxc6","kube-system/kube-apiserver-172-234-199-101","kube-system/kube-scheduler-172-234-199-101"]
Aug 13 00:55:24.401100 kubelet[2714]: E0813 00:55:24.401033 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s"
Aug 13 00:55:24.401100 kubelet[2714]: E0813 00:55:24.401048 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-644589c98-5v7wp"
Aug 13 00:55:24.401100 kubelet[2714]: E0813 00:55:24.401056 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-hxx58"
Aug 13 00:55:24.401100 kubelet[2714]: E0813 00:55:24.401064 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-dnlsw"
Aug 13 00:55:24.401100 kubelet[2714]: E0813 00:55:24.401072 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-x7x94"
Aug 13 00:55:24.401100 kubelet[2714]: E0813 00:55:24.401080 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-199-101"
Aug 13 00:55:24.401100 kubelet[2714]: E0813 00:55:24.401087 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-vxgfg"
Aug 13 00:55:24.401100 kubelet[2714]: E0813 00:55:24.401098 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-mmxc6"
Aug 13 00:55:24.401100 kubelet[2714]: E0813 00:55:24.401107 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-199-101"
Aug 13 00:55:24.401292 kubelet[2714]: E0813 00:55:24.401115 2714 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-199-101"
Aug 13 00:55:24.401292 kubelet[2714]: I0813 00:55:24.401124 2714 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 00:55:24.737803 sshd[6994]: Accepted publickey for core from 147.75.109.163 port 34160 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:55:24.739813 sshd-session[6994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:55:24.745654 systemd-logind[1528]: New session 52 of user core.
Aug 13 00:55:24.752658 systemd[1]: Started session-52.scope - Session 52 of User core.
Aug 13 00:55:25.048244 sshd[6996]: Connection closed by 147.75.109.163 port 34160
Aug 13 00:55:25.049715 sshd-session[6994]: pam_unix(sshd:session): session closed for user core
Aug 13 00:55:25.054343 systemd[1]: sshd@58-172.234.199.101:22-147.75.109.163:34160.service: Deactivated successfully.
Aug 13 00:55:25.056914 systemd[1]: session-52.scope: Deactivated successfully.
Aug 13 00:55:25.058209 systemd-logind[1528]: Session 52 logged out. Waiting for processes to exit.
Aug 13 00:55:25.060509 systemd-logind[1528]: Removed session 52.
Aug 13 00:55:26.243925 containerd[1575]: time="2025-08-13T00:55:26.243880200Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9345d3fc4e163aea4299b925fbb68f1fc68bf5ae7a05616fb050ef225cf5d900\" id:\"f4669195527b5ec6ae14b4dbb787f4d4bef9f504dfe7e5613610077a7352d382\" pid:7021 exited_at:{seconds:1755046526 nanos:242936222}"
Aug 13 00:55:31.958645 kubelet[2714]: E0813 00:55:31.958596 2714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\"\"" pod="calico-system/calico-kube-controllers-6d647ccb87-wkv5s" podUID="2addd270-7ed2-4caf-9455-ca6a63f6fe8b"