May 15 12:51:35.905336 kernel: Linux version 6.12.20-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu May 15 10:42:41 -00 2025 May 15 12:51:35.905358 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=48287e633374b880fa618bd42bee102ae77c50831859c6cedd6ca9e1aec3dd5c May 15 12:51:35.905367 kernel: BIOS-provided physical RAM map: May 15 12:51:35.905376 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable May 15 12:51:35.905382 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved May 15 12:51:35.905387 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved May 15 12:51:35.905394 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable May 15 12:51:35.905400 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved May 15 12:51:35.905406 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved May 15 12:51:35.905412 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved May 15 12:51:35.905418 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved May 15 12:51:35.905424 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved May 15 12:51:35.905432 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable May 15 12:51:35.905438 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved May 15 12:51:35.905445 kernel: NX (Execute Disable) protection: active May 15 12:51:35.905452 kernel: APIC: Static calls initialized May 15 12:51:35.905458 kernel: SMBIOS 2.8 present. 
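The three ranges marked "usable" in the BIOS-e820 map above add up to just under 4 GiB, matching the roughly 4 GiB of RAM this instance reports later in the log. A small illustrative sketch (not part of the boot tooling itself) that sums them:

```python
import re

# BIOS-e820 "usable" ranges copied from the log above (start-end are inclusive).
e820_log = """
BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
"""

usable = 0
for start, end in re.findall(r"\[mem (0x[0-9a-f]+)-(0x[0-9a-f]+)\] usable", e820_log):
    usable += int(end, 16) - int(start, 16) + 1  # ranges are inclusive

print(f"usable RAM reported by firmware: {usable} bytes (~{usable / 2**30:.2f} GiB)")
# Prints just under 4 GiB, consistent with a 4 GB instance minus firmware reservations.
```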
May 15 12:51:35.905466 kernel: DMI: Linode Compute Instance, BIOS Not Specified May 15 12:51:35.905473 kernel: DMI: Memory slots populated: 1/1 May 15 12:51:35.905479 kernel: Hypervisor detected: KVM May 15 12:51:35.905485 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 15 12:51:35.905492 kernel: kvm-clock: using sched offset of 6239913794 cycles May 15 12:51:35.905498 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 15 12:51:35.905505 kernel: tsc: Detected 1999.999 MHz processor May 15 12:51:35.905512 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 15 12:51:35.905519 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 15 12:51:35.905525 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000 May 15 12:51:35.905534 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs May 15 12:51:35.905541 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 15 12:51:35.905547 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000 May 15 12:51:35.905554 kernel: Using GB pages for direct mapping May 15 12:51:35.905560 kernel: ACPI: Early table checksum verification disabled May 15 12:51:35.905567 kernel: ACPI: RSDP 0x00000000000F51B0 000014 (v00 BOCHS ) May 15 12:51:35.905573 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 15 12:51:35.905580 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 15 12:51:35.905586 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 15 12:51:35.905595 kernel: ACPI: FACS 0x000000007FFE0000 000040 May 15 12:51:35.905602 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 15 12:51:35.905608 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 15 12:51:35.905615 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 15 12:51:35.905624 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 15 12:51:35.905631 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea] May 15 12:51:35.905640 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6] May 15 12:51:35.905647 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] May 15 12:51:35.905654 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a] May 15 12:51:35.905661 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2] May 15 12:51:35.905668 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de] May 15 12:51:35.905675 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306] May 15 12:51:35.905682 kernel: No NUMA configuration found May 15 12:51:35.905688 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff] May 15 12:51:35.905697 kernel: NODE_DATA(0) allocated [mem 0x17fff6dc0-0x17fffdfff] May 15 12:51:35.905703 kernel: Zone ranges: May 15 12:51:35.905710 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 15 12:51:35.905717 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] May 15 12:51:35.905723 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff] May 15 12:51:35.905730 kernel: Device empty May 15 12:51:35.905736 kernel: Movable zone start for each node May 15 12:51:35.905743 kernel: Early memory node ranges May 15 12:51:35.905749 kernel: node 0: [mem 
0x0000000000001000-0x000000000009efff] May 15 12:51:35.905756 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff] May 15 12:51:35.905765 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff] May 15 12:51:35.905959 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff] May 15 12:51:35.905966 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 15 12:51:35.905972 kernel: On node 0, zone DMA: 97 pages in unavailable ranges May 15 12:51:35.905979 kernel: On node 0, zone Normal: 35 pages in unavailable ranges May 15 12:51:35.905986 kernel: ACPI: PM-Timer IO Port: 0x608 May 15 12:51:35.905992 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 15 12:51:35.905999 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 15 12:51:35.906005 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 15 12:51:35.906014 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 15 12:51:35.906020 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 15 12:51:35.906027 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 15 12:51:35.906033 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 15 12:51:35.906040 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 15 12:51:35.906046 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 15 12:51:35.906053 kernel: TSC deadline timer available May 15 12:51:35.906059 kernel: CPU topo: Max. logical packages: 1 May 15 12:51:35.906066 kernel: CPU topo: Max. logical dies: 1 May 15 12:51:35.906074 kernel: CPU topo: Max. dies per package: 1 May 15 12:51:35.906081 kernel: CPU topo: Max. threads per core: 1 May 15 12:51:35.906087 kernel: CPU topo: Num. cores per package: 2 May 15 12:51:35.906094 kernel: CPU topo: Num. threads per package: 2 May 15 12:51:35.906100 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs May 15 12:51:35.906107 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() May 15 12:51:35.906113 kernel: kvm-guest: KVM setup pv remote TLB flush May 15 12:51:35.906119 kernel: kvm-guest: setup PV sched yield May 15 12:51:35.906126 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices May 15 12:51:35.906134 kernel: Booting paravirtualized kernel on KVM May 15 12:51:35.906141 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 15 12:51:35.906148 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 May 15 12:51:35.906154 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 May 15 12:51:35.906161 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 May 15 12:51:35.906167 kernel: pcpu-alloc: [0] 0 1 May 15 12:51:35.906174 kernel: kvm-guest: PV spinlocks enabled May 15 12:51:35.906180 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) May 15 12:51:35.906188 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=48287e633374b880fa618bd42bee102ae77c50831859c6cedd6ca9e1aec3dd5c May 15 12:51:35.906196 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
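The kernel command line above carries Flatcar's dm-verity and /usr mount parameters (mount.usr, verity.usr, verity.usrhash) alongside rootflags/mount.usrflags entries duplicated by the bootloader. A minimal sketch, not taken from the kernel or dracut, of splitting that string into key/value pairs the way early-boot tooling broadly treats it; the real parsers also handle quoting and module-prefixed options, so this is only illustrative:

```python
cmdline = (
    "rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a "
    "mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 "
    "rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT "
    "console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai "
    "verity.usrhash=48287e633374b880fa618bd42bee102ae77c50831859c6cedd6ca9e1aec3dd5c"
)

params = []
for token in cmdline.split():
    key, sep, value = token.partition("=")   # only the first "=" separates key from value
    params.append((key, value if sep else None))  # bare flags get None

# Repeated keys (rootflags, mount.usrflags, console) are kept in order.
for key, value in params:
    print(key, "=", value)
```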
May 15 12:51:35.906203 kernel: random: crng init done May 15 12:51:35.906209 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 15 12:51:35.906216 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 15 12:51:35.906222 kernel: Fallback order for Node 0: 0 May 15 12:51:35.906229 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443 May 15 12:51:35.906235 kernel: Policy zone: Normal May 15 12:51:35.906242 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 15 12:51:35.906250 kernel: software IO TLB: area num 2. May 15 12:51:35.907401 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 15 12:51:35.907415 kernel: ftrace: allocating 40065 entries in 157 pages May 15 12:51:35.907426 kernel: ftrace: allocated 157 pages with 5 groups May 15 12:51:35.907437 kernel: Dynamic Preempt: voluntary May 15 12:51:35.907448 kernel: rcu: Preemptible hierarchical RCU implementation. May 15 12:51:35.907465 kernel: rcu: RCU event tracing is enabled. May 15 12:51:35.907478 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 15 12:51:35.907490 kernel: Trampoline variant of Tasks RCU enabled. May 15 12:51:35.907502 kernel: Rude variant of Tasks RCU enabled. May 15 12:51:35.907519 kernel: Tracing variant of Tasks RCU enabled. May 15 12:51:35.907530 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 15 12:51:35.907540 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 15 12:51:35.907552 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 15 12:51:35.907572 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 15 12:51:35.907586 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 15 12:51:35.907597 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 May 15 12:51:35.907609 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 15 12:51:35.907620 kernel: Console: colour VGA+ 80x25 May 15 12:51:35.907633 kernel: printk: legacy console [tty0] enabled May 15 12:51:35.907645 kernel: printk: legacy console [ttyS0] enabled May 15 12:51:35.907661 kernel: ACPI: Core revision 20240827 May 15 12:51:35.907674 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns May 15 12:51:35.907686 kernel: APIC: Switch to symmetric I/O mode setup May 15 12:51:35.907699 kernel: x2apic enabled May 15 12:51:35.907712 kernel: APIC: Switched APIC routing to: physical x2apic May 15 12:51:35.907728 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() May 15 12:51:35.907740 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() May 15 12:51:35.907751 kernel: kvm-guest: setup PV IPIs May 15 12:51:35.907762 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 15 12:51:35.907963 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85afc727, max_idle_ns: 881590685098 ns May 15 12:51:35.907975 kernel: Calibrating delay loop (skipped) preset value.. 
3999.99 BogoMIPS (lpj=1999999) May 15 12:51:35.907986 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated May 15 12:51:35.907998 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 May 15 12:51:35.908008 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 May 15 12:51:35.908023 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 15 12:51:35.908035 kernel: Spectre V2 : Mitigation: Retpolines May 15 12:51:35.908046 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch May 15 12:51:35.908057 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT May 15 12:51:35.908069 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls May 15 12:51:35.908081 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 15 12:51:35.908093 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl May 15 12:51:35.908104 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! May 15 12:51:35.908117 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. May 15 12:51:35.908132 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode May 15 12:51:35.908144 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 15 12:51:35.908155 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 15 12:51:35.908167 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 15 12:51:35.908176 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' May 15 12:51:35.908183 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 15 12:51:35.908190 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8 May 15 12:51:35.908197 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format. May 15 12:51:35.908206 kernel: Freeing SMP alternatives memory: 32K May 15 12:51:35.908213 kernel: pid_max: default: 32768 minimum: 301 May 15 12:51:35.908220 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima May 15 12:51:35.908227 kernel: landlock: Up and running. May 15 12:51:35.908234 kernel: SELinux: Initializing. May 15 12:51:35.908241 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 15 12:51:35.908248 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 15 12:51:35.908272 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) May 15 12:51:35.908279 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. May 15 12:51:35.908289 kernel: ... version: 0 May 15 12:51:35.908295 kernel: ... bit width: 48 May 15 12:51:35.908302 kernel: ... generic registers: 6 May 15 12:51:35.908309 kernel: ... value mask: 0000ffffffffffff May 15 12:51:35.908316 kernel: ... max period: 00007fffffffffff May 15 12:51:35.908323 kernel: ... fixed-purpose events: 0 May 15 12:51:35.908330 kernel: ... event mask: 000000000000003f May 15 12:51:35.908336 kernel: signal: max sigframe size: 3376 May 15 12:51:35.908343 kernel: rcu: Hierarchical SRCU implementation. May 15 12:51:35.908352 kernel: rcu: Max phase no-delay instances is 400. 
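The BogoMIPS values reported around here follow directly from the preset loops-per-jiffy, which is itself derived from the 1999.999 MHz TSC detected earlier. A quick check using the kernel's usual reporting formula, assuming HZ=1000 (not stated in the log, but implied by lpj matching the TSC frequency in kHz):

```python
tsc_khz = 1_999_999           # "tsc: Detected 1999.999 MHz processor"
HZ = 1000                     # assumed tick rate; implied by lpj == tsc_khz below

lpj = tsc_khz * 1000 // HZ    # preset loops-per-jiffy -> 1999999, matches "(lpj=1999999)"

def bogomips(loops_per_jiffy: int) -> str:
    # Mirrors the kernel's truncating printout: lpj/(500000/HZ) "." (lpj/(5000/HZ)) % 100
    return f"{loops_per_jiffy // (500_000 // HZ)}.{(loops_per_jiffy // (5_000 // HZ)) % 100:02d}"

print(bogomips(lpj))       # 3999.99 -> per-CPU "Calibrating delay loop (skipped)" line
print(bogomips(2 * lpj))   # 7999.99 -> "Total of 2 processors activated (7999.99 BogoMIPS)"
```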
May 15 12:51:35.908359 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level May 15 12:51:35.908366 kernel: smp: Bringing up secondary CPUs ... May 15 12:51:35.908373 kernel: smpboot: x86: Booting SMP configuration: May 15 12:51:35.908380 kernel: .... node #0, CPUs: #1 May 15 12:51:35.908386 kernel: smp: Brought up 1 node, 2 CPUs May 15 12:51:35.908393 kernel: smpboot: Total of 2 processors activated (7999.99 BogoMIPS) May 15 12:51:35.908401 kernel: Memory: 3961048K/4193772K available (14336K kernel code, 2438K rwdata, 9944K rodata, 54416K init, 2544K bss, 227296K reserved, 0K cma-reserved) May 15 12:51:35.908408 kernel: devtmpfs: initialized May 15 12:51:35.908416 kernel: x86/mm: Memory block size: 128MB May 15 12:51:35.908423 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 15 12:51:35.908430 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 15 12:51:35.908437 kernel: pinctrl core: initialized pinctrl subsystem May 15 12:51:35.908444 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 15 12:51:35.908451 kernel: audit: initializing netlink subsys (disabled) May 15 12:51:35.908458 kernel: audit: type=2000 audit(1747313493.088:1): state=initialized audit_enabled=0 res=1 May 15 12:51:35.908464 kernel: thermal_sys: Registered thermal governor 'step_wise' May 15 12:51:35.908471 kernel: thermal_sys: Registered thermal governor 'user_space' May 15 12:51:35.908480 kernel: cpuidle: using governor menu May 15 12:51:35.908487 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 15 12:51:35.908494 kernel: dca service started, version 1.12.1 May 15 12:51:35.908500 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] May 15 12:51:35.908507 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry May 15 12:51:35.908514 kernel: PCI: Using configuration type 1 for base access May 15 12:51:35.908521 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
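The ECAM window reserved above (0xb0000000-0xbfffffff for bus 00-ff) is exactly the size a full 256-bus PCIe segment needs at 4 KiB of configuration space per function. A quick illustrative check:

```python
buses, devices, functions, cfg_space = 256, 32, 8, 4096   # ECAM layout per bus segment

window = 0xbfffffff - 0xb0000000 + 1        # reserved range from the log, inclusive
needed = buses * devices * functions * cfg_space

print(hex(window), hex(needed), window == needed)   # both 0x10000000, i.e. 256 MiB
```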
May 15 12:51:35.908528 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 15 12:51:35.908535 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page May 15 12:51:35.908544 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 15 12:51:35.908551 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 15 12:51:35.908557 kernel: ACPI: Added _OSI(Module Device) May 15 12:51:35.908564 kernel: ACPI: Added _OSI(Processor Device) May 15 12:51:35.908571 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 15 12:51:35.908578 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 15 12:51:35.908585 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 15 12:51:35.908592 kernel: ACPI: Interpreter enabled May 15 12:51:35.908598 kernel: ACPI: PM: (supports S0 S3 S5) May 15 12:51:35.908607 kernel: ACPI: Using IOAPIC for interrupt routing May 15 12:51:35.908614 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 15 12:51:35.908622 kernel: PCI: Using E820 reservations for host bridge windows May 15 12:51:35.908628 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F May 15 12:51:35.908635 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 15 12:51:35.908819 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 15 12:51:35.908933 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] May 15 12:51:35.909039 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] May 15 12:51:35.909053 kernel: PCI host bridge to bus 0000:00 May 15 12:51:35.909167 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 15 12:51:35.910387 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 15 12:51:35.910501 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 15 12:51:35.910600 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] May 15 12:51:35.910723 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 15 12:51:35.912185 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window] May 15 12:51:35.912345 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 15 12:51:35.912528 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint May 15 12:51:35.912659 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint May 15 12:51:35.912769 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref] May 15 12:51:35.912874 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff] May 15 12:51:35.912978 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref] May 15 12:51:35.913135 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 15 12:51:35.913327 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint May 15 12:51:35.913450 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f] May 15 12:51:35.913583 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff] May 15 12:51:35.913696 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref] May 15 12:51:35.913812 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint May 15 12:51:35.913918 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f] May 15 12:51:35.914028 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff] May 15 
12:51:35.914132 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref] May 15 12:51:35.914236 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref] May 15 12:51:35.914371 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint May 15 12:51:35.914478 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO May 15 12:51:35.914591 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint May 15 12:51:35.916209 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df] May 15 12:51:35.916477 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff] May 15 12:51:35.916669 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint May 15 12:51:35.916833 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] May 15 12:51:35.919299 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 15 12:51:35.919312 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 15 12:51:35.919320 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 15 12:51:35.919327 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 15 12:51:35.919338 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 May 15 12:51:35.919345 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 May 15 12:51:35.919352 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 May 15 12:51:35.919358 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 May 15 12:51:35.919365 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 May 15 12:51:35.919372 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 May 15 12:51:35.919379 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 May 15 12:51:35.919386 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 May 15 12:51:35.919392 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 May 15 12:51:35.919401 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 May 15 12:51:35.919408 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 May 15 12:51:35.919415 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 May 15 12:51:35.919422 kernel: iommu: Default domain type: Translated May 15 12:51:35.919429 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 15 12:51:35.919435 kernel: PCI: Using ACPI for IRQ routing May 15 12:51:35.919442 kernel: PCI: pci_cache_line_size set to 64 bytes May 15 12:51:35.919449 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff] May 15 12:51:35.919456 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff] May 15 12:51:35.919629 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device May 15 12:51:35.919746 kernel: pci 0000:00:01.0: vgaarb: bridge control possible May 15 12:51:35.919982 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 15 12:51:35.920044 kernel: vgaarb: loaded May 15 12:51:35.920051 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 May 15 12:51:35.920059 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter May 15 12:51:35.920066 kernel: clocksource: Switched to clocksource kvm-clock May 15 12:51:35.920073 kernel: VFS: Disk quotas dquot_6.6.0 May 15 12:51:35.920083 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 15 12:51:35.920090 kernel: pnp: PnP ACPI init May 15 12:51:35.920218 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved May 15 12:51:35.920229 kernel: pnp: PnP ACPI: found 5 
devices May 15 12:51:35.920236 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 15 12:51:35.920243 kernel: NET: Registered PF_INET protocol family May 15 12:51:35.920272 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 15 12:51:35.920279 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 15 12:51:35.920290 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 15 12:51:35.920297 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 15 12:51:35.920304 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 15 12:51:35.920310 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 15 12:51:35.920317 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 15 12:51:35.920324 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 15 12:51:35.920331 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 15 12:51:35.920338 kernel: NET: Registered PF_XDP protocol family May 15 12:51:35.920441 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 15 12:51:35.920542 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 15 12:51:35.920637 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 15 12:51:35.920732 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] May 15 12:51:35.920827 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] May 15 12:51:35.920921 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window] May 15 12:51:35.920930 kernel: PCI: CLS 0 bytes, default 64 May 15 12:51:35.920937 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) May 15 12:51:35.920944 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB) May 15 12:51:35.920954 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85afc727, max_idle_ns: 881590685098 ns May 15 12:51:35.920962 kernel: Initialise system trusted keyrings May 15 12:51:35.920969 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 15 12:51:35.920976 kernel: Key type asymmetric registered May 15 12:51:35.920982 kernel: Asymmetric key parser 'x509' registered May 15 12:51:35.920990 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) May 15 12:51:35.920997 kernel: io scheduler mq-deadline registered May 15 12:51:35.921004 kernel: io scheduler kyber registered May 15 12:51:35.921011 kernel: io scheduler bfq registered May 15 12:51:35.921020 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 15 12:51:35.921027 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 May 15 12:51:35.921034 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 May 15 12:51:35.921042 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 15 12:51:35.921048 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 15 12:51:35.921055 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 15 12:51:35.921062 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 15 12:51:35.921069 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 15 12:51:35.921182 kernel: rtc_cmos 00:03: RTC can wake from S4 May 15 12:51:35.921196 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 15 12:51:35.921343 kernel: rtc_cmos 00:03: registered as 
rtc0 May 15 12:51:35.921447 kernel: rtc_cmos 00:03: setting system clock to 2025-05-15T12:51:35 UTC (1747313495) May 15 12:51:35.921552 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs May 15 12:51:35.921562 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled May 15 12:51:35.921569 kernel: NET: Registered PF_INET6 protocol family May 15 12:51:35.921576 kernel: Segment Routing with IPv6 May 15 12:51:35.921583 kernel: In-situ OAM (IOAM) with IPv6 May 15 12:51:35.921593 kernel: NET: Registered PF_PACKET protocol family May 15 12:51:35.921600 kernel: Key type dns_resolver registered May 15 12:51:35.921607 kernel: IPI shorthand broadcast: enabled May 15 12:51:35.921614 kernel: sched_clock: Marking stable (2984004892, 226319769)->(3250226874, -39902213) May 15 12:51:35.921622 kernel: registered taskstats version 1 May 15 12:51:35.921628 kernel: Loading compiled-in X.509 certificates May 15 12:51:35.921635 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.20-flatcar: 05e05785144663be6df1db78301487421c4773b6' May 15 12:51:35.921643 kernel: Demotion targets for Node 0: null May 15 12:51:35.921649 kernel: Key type .fscrypt registered May 15 12:51:35.921658 kernel: Key type fscrypt-provisioning registered May 15 12:51:35.921665 kernel: ima: No TPM chip found, activating TPM-bypass! May 15 12:51:35.921672 kernel: ima: Allocated hash algorithm: sha1 May 15 12:51:35.921678 kernel: ima: No architecture policies found May 15 12:51:35.921685 kernel: clk: Disabling unused clocks May 15 12:51:35.921692 kernel: Warning: unable to open an initial console. May 15 12:51:35.921699 kernel: Freeing unused kernel image (initmem) memory: 54416K May 15 12:51:35.921706 kernel: Write protecting the kernel read-only data: 24576k May 15 12:51:35.921713 kernel: Freeing unused kernel image (rodata/data gap) memory: 296K May 15 12:51:35.921722 kernel: Run /init as init process May 15 12:51:35.921729 kernel: with arguments: May 15 12:51:35.921735 kernel: /init May 15 12:51:35.921742 kernel: with environment: May 15 12:51:35.921749 kernel: HOME=/ May 15 12:51:35.921770 kernel: TERM=linux May 15 12:51:35.921779 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 15 12:51:35.921787 systemd[1]: Successfully made /usr/ read-only. May 15 12:51:35.921800 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 15 12:51:35.921808 systemd[1]: Detected virtualization kvm. May 15 12:51:35.921815 systemd[1]: Detected architecture x86-64. May 15 12:51:35.921822 systemd[1]: Running in initrd. May 15 12:51:35.921830 systemd[1]: No hostname configured, using default hostname. May 15 12:51:35.921837 systemd[1]: Hostname set to . May 15 12:51:35.921845 systemd[1]: Initializing machine ID from random generator. May 15 12:51:35.921852 systemd[1]: Queued start job for default target initrd.target. May 15 12:51:35.921863 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 15 12:51:35.921871 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 15 12:51:35.921880 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
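The escaped device unit names above and below (dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device for /dev/disk/by-label/EFI-SYSTEM, dev-mapper-usr.device for /dev/mapper/usr) come from systemd's path escaping: "/" maps to "-" and other special characters, including "-" itself, become \xNN. A rough sketch of that mapping covering only the simple cases seen here; the authoritative rules live in systemd-escape / unit_name_from_path:

```python
def path_to_device_unit(path: str) -> str:
    """Approximate systemd path escaping for .device units (illustrative only)."""
    body = path.strip("/")
    out = []
    for ch in body:
        if ch == "/":
            out.append("-")                      # path separators become dashes
        elif ch.isalnum() or ch in "_.":
            out.append(ch)                       # kept as-is
        else:
            out.append(f"\\x{ord(ch):02x}")      # everything else, incl. '-', is hex-escaped
    return "".join(out) + ".device"

print(path_to_device_unit("/dev/disk/by-label/EFI-SYSTEM"))  # dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device
print(path_to_device_unit("/dev/mapper/usr"))                # dev-mapper-usr.device
```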
May 15 12:51:35.921888 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 15 12:51:35.921896 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 15 12:51:35.921904 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 15 12:51:35.921912 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 15 12:51:35.921922 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 15 12:51:35.921930 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 15 12:51:35.921937 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 15 12:51:35.921945 systemd[1]: Reached target paths.target - Path Units. May 15 12:51:35.921952 systemd[1]: Reached target slices.target - Slice Units. May 15 12:51:35.921960 systemd[1]: Reached target swap.target - Swaps. May 15 12:51:35.921967 systemd[1]: Reached target timers.target - Timer Units. May 15 12:51:35.921975 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 15 12:51:35.921984 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 15 12:51:35.921991 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 15 12:51:35.921999 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 15 12:51:35.922006 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 15 12:51:35.922014 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 15 12:51:35.922021 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 15 12:51:35.922031 systemd[1]: Reached target sockets.target - Socket Units. May 15 12:51:35.922039 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 15 12:51:35.922046 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 15 12:51:35.922054 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 15 12:51:35.922061 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). May 15 12:51:35.922069 systemd[1]: Starting systemd-fsck-usr.service... May 15 12:51:35.922077 systemd[1]: Starting systemd-journald.service - Journal Service... May 15 12:51:35.922084 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 15 12:51:35.922094 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 12:51:35.922101 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 15 12:51:35.922109 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 15 12:51:35.922117 systemd[1]: Finished systemd-fsck-usr.service. May 15 12:51:35.922127 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 15 12:51:35.922154 systemd-journald[205]: Collecting audit messages is disabled. May 15 12:51:35.922173 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
May 15 12:51:35.922182 systemd-journald[205]: Journal started May 15 12:51:35.922209 systemd-journald[205]: Runtime Journal (/run/log/journal/cd80bf2c4065453eb7bdc70246498fbf) is 8M, max 78.5M, 70.5M free. May 15 12:51:35.913530 systemd-modules-load[207]: Inserted module 'overlay' May 15 12:51:35.950090 systemd[1]: Started systemd-journald.service - Journal Service. May 15 12:51:35.960490 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 15 12:51:36.012155 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 15 12:51:36.012186 kernel: Bridge firewalling registered May 15 12:51:35.962787 systemd-modules-load[207]: Inserted module 'br_netfilter' May 15 12:51:36.022350 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 15 12:51:36.025516 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 15 12:51:36.029362 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 15 12:51:36.031210 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 15 12:51:36.036155 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 15 12:51:36.039282 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 15 12:51:36.045621 systemd-tmpfiles[224]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. May 15 12:51:36.052135 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 12:51:36.060432 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 15 12:51:36.064386 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 15 12:51:36.068180 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 12:51:36.070619 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 15 12:51:36.092482 dracut-cmdline[246]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=48287e633374b880fa618bd42bee102ae77c50831859c6cedd6ca9e1aec3dd5c May 15 12:51:36.111789 systemd-resolved[243]: Positive Trust Anchors: May 15 12:51:36.112337 systemd-resolved[243]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 15 12:51:36.112365 systemd-resolved[243]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 15 12:51:36.117963 systemd-resolved[243]: Defaulting to hostname 'linux'. May 15 12:51:36.119082 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
May 15 12:51:36.119937 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 15 12:51:36.191302 kernel: SCSI subsystem initialized May 15 12:51:36.200340 kernel: Loading iSCSI transport class v2.0-870. May 15 12:51:36.211291 kernel: iscsi: registered transport (tcp) May 15 12:51:36.231723 kernel: iscsi: registered transport (qla4xxx) May 15 12:51:36.231766 kernel: QLogic iSCSI HBA Driver May 15 12:51:36.255382 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 15 12:51:36.275594 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 15 12:51:36.279368 systemd[1]: Reached target network-pre.target - Preparation for Network. May 15 12:51:36.348212 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 15 12:51:36.351571 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 15 12:51:36.408296 kernel: raid6: avx2x4 gen() 29551 MB/s May 15 12:51:36.426282 kernel: raid6: avx2x2 gen() 29077 MB/s May 15 12:51:36.444818 kernel: raid6: avx2x1 gen() 20452 MB/s May 15 12:51:36.444859 kernel: raid6: using algorithm avx2x4 gen() 29551 MB/s May 15 12:51:36.463628 kernel: raid6: .... xor() 4620 MB/s, rmw enabled May 15 12:51:36.463668 kernel: raid6: using avx2x2 recovery algorithm May 15 12:51:36.487301 kernel: xor: automatically using best checksumming function avx May 15 12:51:36.624309 kernel: Btrfs loaded, zoned=no, fsverity=no May 15 12:51:36.634272 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 15 12:51:36.637763 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 12:51:36.661906 systemd-udevd[455]: Using default interface naming scheme 'v255'. May 15 12:51:36.667538 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 12:51:36.670737 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 15 12:51:36.703037 dracut-pre-trigger[460]: rd.md=0: removing MD RAID activation May 15 12:51:36.745210 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 15 12:51:36.748180 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 15 12:51:36.819242 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 15 12:51:36.824364 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 15 12:51:36.911276 kernel: cryptd: max_cpu_qlen set to 1000 May 15 12:51:37.091625 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 15 12:51:37.091752 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 15 12:51:37.093433 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 15 12:51:37.099320 kernel: AES CTR mode by8 optimization enabled May 15 12:51:37.096203 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 12:51:37.097783 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 15 12:51:37.108294 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 May 15 12:51:37.111274 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues May 15 12:51:37.122416 kernel: libata version 3.00 loaded. 
May 15 12:51:37.122443 kernel: scsi host0: Virtio SCSI HBA May 15 12:51:37.127166 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 May 15 12:51:37.127213 kernel: ahci 0000:00:1f.2: version 3.0 May 15 12:51:37.172443 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 15 12:51:37.172461 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode May 15 12:51:37.172618 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) May 15 12:51:37.172747 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 15 12:51:37.172870 kernel: scsi host1: ahci May 15 12:51:37.173020 kernel: scsi host2: ahci May 15 12:51:37.173153 kernel: scsi host3: ahci May 15 12:51:37.173311 kernel: scsi host4: ahci May 15 12:51:37.173443 kernel: scsi host5: ahci May 15 12:51:37.173823 kernel: scsi host6: ahci May 15 12:51:37.173955 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46 lpm-pol 0 May 15 12:51:37.173970 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46 lpm-pol 0 May 15 12:51:37.173979 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46 lpm-pol 0 May 15 12:51:37.173988 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46 lpm-pol 0 May 15 12:51:37.173997 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46 lpm-pol 0 May 15 12:51:37.174005 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46 lpm-pol 0 May 15 12:51:37.224376 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 15 12:51:37.487500 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 15 12:51:37.487564 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 15 12:51:37.487587 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 15 12:51:37.487596 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 15 12:51:37.487606 kernel: ata3: SATA link down (SStatus 0 SControl 300) May 15 12:51:37.492273 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 15 12:51:37.505171 kernel: sd 0:0:0:0: Power-on or device reset occurred May 15 12:51:37.553273 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB) May 15 12:51:37.553429 kernel: sd 0:0:0:0: [sda] Write Protect is off May 15 12:51:37.553571 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 May 15 12:51:37.553706 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA May 15 12:51:37.553847 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 15 12:51:37.553857 kernel: GPT:9289727 != 167739391 May 15 12:51:37.553867 kernel: GPT:Alternate GPT header not at the end of the disk. May 15 12:51:37.553876 kernel: GPT:9289727 != 167739391 May 15 12:51:37.553885 kernel: GPT: Use GNU Parted to correct GPT errors. May 15 12:51:37.553894 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 15 12:51:37.553904 kernel: sd 0:0:0:0: [sda] Attached SCSI disk May 15 12:51:37.609085 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. May 15 12:51:37.618583 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. May 15 12:51:37.633677 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. May 15 12:51:37.634465 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. 
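The capacity figure and the GPT warning above are both plain arithmetic on the reported block count: 167739392 sectors of 512 bytes is about 85.9 GB (80.0 GiB), and a consistent GPT keeps its backup header in the last LBA (167739391), whereas this image still records it at LBA 9289727, which usually means the image was written at a smaller size and later grown to the full disk. An illustrative check:

```python
blocks, block_size = 167_739_392, 512        # from "sd 0:0:0:0: [sda] ..." in the log

size = blocks * block_size
print(f"{size / 1e9:.1f} GB / {size / 2**30:.1f} GiB")   # 85.9 GB / 80.0 GiB, as reported

# A consistent GPT keeps the backup header in the last LBA of the disk:
expected_alt_lba = blocks - 1                # 167739391
logged_alt_lba = 9_289_727                   # "GPT:9289727 != 167739391"
print(expected_alt_lba, logged_alt_lba, expected_alt_lba == logged_alt_lba)
```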
May 15 12:51:37.636360 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 15 12:51:37.646180 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. May 15 12:51:37.648556 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 15 12:51:37.649153 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 12:51:37.650465 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 15 12:51:37.652807 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 15 12:51:37.656406 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 15 12:51:37.677343 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 15 12:51:37.682285 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 15 12:51:37.683520 disk-uuid[632]: Primary Header is updated. May 15 12:51:37.683520 disk-uuid[632]: Secondary Entries is updated. May 15 12:51:37.683520 disk-uuid[632]: Secondary Header is updated. May 15 12:51:38.707334 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 15 12:51:38.708352 disk-uuid[640]: The operation has completed successfully. May 15 12:51:38.758465 systemd[1]: disk-uuid.service: Deactivated successfully. May 15 12:51:38.758585 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 15 12:51:38.788189 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 15 12:51:38.803616 sh[654]: Success May 15 12:51:38.822686 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 15 12:51:38.822727 kernel: device-mapper: uevent: version 1.0.3 May 15 12:51:38.823640 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev May 15 12:51:38.835359 kernel: device-mapper: verity: sha256 using shash "sha256-ni" May 15 12:51:38.877761 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 15 12:51:38.882321 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 15 12:51:38.891133 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 15 12:51:38.903277 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' May 15 12:51:38.906269 kernel: BTRFS: device fsid 2d504097-db49-4d66-a0d5-eeb665b21004 devid 1 transid 41 /dev/mapper/usr (254:0) scanned by mount (666) May 15 12:51:38.911658 kernel: BTRFS info (device dm-0): first mount of filesystem 2d504097-db49-4d66-a0d5-eeb665b21004 May 15 12:51:38.911680 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 15 12:51:38.911692 kernel: BTRFS info (device dm-0): using free-space-tree May 15 12:51:38.928889 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 15 12:51:38.929583 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. May 15 12:51:38.930983 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 15 12:51:38.931709 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 15 12:51:38.935354 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
May 15 12:51:38.967292 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 (8:6) scanned by mount (699) May 15 12:51:38.967353 kernel: BTRFS info (device sda6): first mount of filesystem afd0c70c-d15e-448c-8325-f96e3c3ed3a5 May 15 12:51:38.971603 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 15 12:51:38.971631 kernel: BTRFS info (device sda6): using free-space-tree May 15 12:51:38.984397 kernel: BTRFS info (device sda6): last unmount of filesystem afd0c70c-d15e-448c-8325-f96e3c3ed3a5 May 15 12:51:38.985335 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 15 12:51:38.987425 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 15 12:51:39.064099 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 15 12:51:39.070227 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 15 12:51:39.092716 ignition[764]: Ignition 2.21.0 May 15 12:51:39.092729 ignition[764]: Stage: fetch-offline May 15 12:51:39.094978 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 15 12:51:39.092761 ignition[764]: no configs at "/usr/lib/ignition/base.d" May 15 12:51:39.092770 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 15 12:51:39.092843 ignition[764]: parsed url from cmdline: "" May 15 12:51:39.092847 ignition[764]: no config URL provided May 15 12:51:39.092851 ignition[764]: reading system config file "/usr/lib/ignition/user.ign" May 15 12:51:39.092859 ignition[764]: no config at "/usr/lib/ignition/user.ign" May 15 12:51:39.092863 ignition[764]: failed to fetch config: resource requires networking May 15 12:51:39.093172 ignition[764]: Ignition finished successfully May 15 12:51:39.112308 systemd-networkd[841]: lo: Link UP May 15 12:51:39.112318 systemd-networkd[841]: lo: Gained carrier May 15 12:51:39.113726 systemd-networkd[841]: Enumeration completed May 15 12:51:39.113807 systemd[1]: Started systemd-networkd.service - Network Configuration. May 15 12:51:39.114498 systemd-networkd[841]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 12:51:39.114502 systemd-networkd[841]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 15 12:51:39.115464 systemd[1]: Reached target network.target - Network. May 15 12:51:39.116602 systemd-networkd[841]: eth0: Link UP May 15 12:51:39.116606 systemd-networkd[841]: eth0: Gained carrier May 15 12:51:39.116621 systemd-networkd[841]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 12:51:39.120406 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
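The Ignition fetch stage that follows retries a PUT against http://169.254.169.254/v1/token until networking is up (DHCP only completes between attempts #2 and #3), then GETs http://169.254.169.254/v1/user-data and logs a SHA512 of the fetched config. A rough sketch of that token-then-fetch flow; the header names are assumptions based on the Akamai/Linode metadata service's documented usage and are not shown anywhere in the log:

```python
import hashlib
import time
import urllib.request

METADATA = "http://169.254.169.254"

def fetch_user_data(retries: int = 5, delay: float = 2.0) -> bytes:
    last_err = None
    for attempt in range(1, retries + 1):
        try:
            # Token request, mirroring the logged "PUT http://169.254.169.254/v1/token" attempts.
            req = urllib.request.Request(
                f"{METADATA}/v1/token",
                method="PUT",
                headers={"Metadata-Token-Expiry-Seconds": "300"},  # assumed header name
            )
            token = urllib.request.urlopen(req, timeout=5).read().decode()

            # Then the logged "GET http://169.254.169.254/v1/user-data", using the token.
            req = urllib.request.Request(
                f"{METADATA}/v1/user-data",
                headers={"Metadata-Token": token},  # assumed header name
            )
            return urllib.request.urlopen(req, timeout=5).read()
        except OSError as err:          # e.g. "network is unreachable" before DHCP finishes
            last_err = err
            time.sleep(delay)
    raise last_err

if __name__ == "__main__":
    data = fetch_user_data()
    print("config SHA512:", hashlib.sha512(data).hexdigest())  # same digest style Ignition logs
```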
May 15 12:51:39.154971 ignition[845]: Ignition 2.21.0 May 15 12:51:39.155674 ignition[845]: Stage: fetch May 15 12:51:39.155818 ignition[845]: no configs at "/usr/lib/ignition/base.d" May 15 12:51:39.155828 ignition[845]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 15 12:51:39.155905 ignition[845]: parsed url from cmdline: "" May 15 12:51:39.155909 ignition[845]: no config URL provided May 15 12:51:39.155913 ignition[845]: reading system config file "/usr/lib/ignition/user.ign" May 15 12:51:39.155921 ignition[845]: no config at "/usr/lib/ignition/user.ign" May 15 12:51:39.155957 ignition[845]: PUT http://169.254.169.254/v1/token: attempt #1 May 15 12:51:39.156180 ignition[845]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable May 15 12:51:39.356878 ignition[845]: PUT http://169.254.169.254/v1/token: attempt #2 May 15 12:51:39.357756 ignition[845]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable May 15 12:51:39.655335 systemd-networkd[841]: eth0: DHCPv4 address 172.232.9.197/24, gateway 172.232.9.1 acquired from 23.205.167.18 May 15 12:51:39.758428 ignition[845]: PUT http://169.254.169.254/v1/token: attempt #3 May 15 12:51:39.860593 ignition[845]: PUT result: OK May 15 12:51:39.860699 ignition[845]: GET http://169.254.169.254/v1/user-data: attempt #1 May 15 12:51:39.973652 ignition[845]: GET result: OK May 15 12:51:39.974361 ignition[845]: parsing config with SHA512: fec35ca5862d79aa49c40fa0ff2baaef744110037c9080916fc1997c11505d39b8a27e5e4c581b387729bfe2cfadc037008c96535104277866b2daa6cd8f9525 May 15 12:51:39.980580 unknown[845]: fetched base config from "system" May 15 12:51:39.980591 unknown[845]: fetched base config from "system" May 15 12:51:39.980860 ignition[845]: fetch: fetch complete May 15 12:51:39.980597 unknown[845]: fetched user config from "akamai" May 15 12:51:39.980865 ignition[845]: fetch: fetch passed May 15 12:51:39.980907 ignition[845]: Ignition finished successfully May 15 12:51:39.984784 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). May 15 12:51:39.986480 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 15 12:51:40.042356 ignition[852]: Ignition 2.21.0 May 15 12:51:40.042375 ignition[852]: Stage: kargs May 15 12:51:40.042549 ignition[852]: no configs at "/usr/lib/ignition/base.d" May 15 12:51:40.042565 ignition[852]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 15 12:51:40.043934 ignition[852]: kargs: kargs passed May 15 12:51:40.046162 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 15 12:51:40.043989 ignition[852]: Ignition finished successfully May 15 12:51:40.049398 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 15 12:51:40.077019 ignition[858]: Ignition 2.21.0 May 15 12:51:40.077038 ignition[858]: Stage: disks May 15 12:51:40.077214 ignition[858]: no configs at "/usr/lib/ignition/base.d" May 15 12:51:40.077228 ignition[858]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 15 12:51:40.078202 ignition[858]: disks: disks passed May 15 12:51:40.080233 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 15 12:51:40.078281 ignition[858]: Ignition finished successfully May 15 12:51:40.081594 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 15 12:51:40.082920 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
May 15 12:51:40.084159 systemd[1]: Reached target local-fs.target - Local File Systems. May 15 12:51:40.085550 systemd[1]: Reached target sysinit.target - System Initialization. May 15 12:51:40.086991 systemd[1]: Reached target basic.target - Basic System. May 15 12:51:40.089938 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 15 12:51:40.112987 systemd-fsck[867]: ROOT: clean, 15/553520 files, 52789/553472 blocks May 15 12:51:40.115814 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 15 12:51:40.119223 systemd[1]: Mounting sysroot.mount - /sysroot... May 15 12:51:40.233278 kernel: EXT4-fs (sda9): mounted filesystem f7dea4bd-2644-4592-b85b-330f322c4d2b r/w with ordered data mode. Quota mode: none. May 15 12:51:40.234330 systemd[1]: Mounted sysroot.mount - /sysroot. May 15 12:51:40.235391 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 15 12:51:40.237315 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 15 12:51:40.239107 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 15 12:51:40.241551 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 15 12:51:40.241604 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 15 12:51:40.241629 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 15 12:51:40.249663 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 15 12:51:40.253185 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 15 12:51:40.260588 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 (8:6) scanned by mount (875) May 15 12:51:40.260624 kernel: BTRFS info (device sda6): first mount of filesystem afd0c70c-d15e-448c-8325-f96e3c3ed3a5 May 15 12:51:40.264628 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 15 12:51:40.264699 kernel: BTRFS info (device sda6): using free-space-tree May 15 12:51:40.276479 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 15 12:51:40.304706 initrd-setup-root[899]: cut: /sysroot/etc/passwd: No such file or directory May 15 12:51:40.309365 initrd-setup-root[906]: cut: /sysroot/etc/group: No such file or directory May 15 12:51:40.312794 initrd-setup-root[913]: cut: /sysroot/etc/shadow: No such file or directory May 15 12:51:40.317707 initrd-setup-root[920]: cut: /sysroot/etc/gshadow: No such file or directory May 15 12:51:40.400878 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 15 12:51:40.403287 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 15 12:51:40.405013 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 15 12:51:40.415990 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 15 12:51:40.418528 kernel: BTRFS info (device sda6): last unmount of filesystem afd0c70c-d15e-448c-8325-f96e3c3ed3a5 May 15 12:51:40.433322 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
May 15 12:51:40.441952 ignition[990]: INFO : Ignition 2.21.0 May 15 12:51:40.442744 ignition[990]: INFO : Stage: mount May 15 12:51:40.443226 ignition[990]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 12:51:40.443226 ignition[990]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 15 12:51:40.444617 ignition[990]: INFO : mount: mount passed May 15 12:51:40.444617 ignition[990]: INFO : Ignition finished successfully May 15 12:51:40.445803 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 15 12:51:40.447575 systemd[1]: Starting ignition-files.service - Ignition (files)... May 15 12:51:40.931396 systemd-networkd[841]: eth0: Gained IPv6LL May 15 12:51:41.236226 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 15 12:51:41.262731 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 (8:6) scanned by mount (1001) May 15 12:51:41.262765 kernel: BTRFS info (device sda6): first mount of filesystem afd0c70c-d15e-448c-8325-f96e3c3ed3a5 May 15 12:51:41.265475 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 15 12:51:41.269049 kernel: BTRFS info (device sda6): using free-space-tree May 15 12:51:41.273737 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 15 12:51:41.303655 ignition[1017]: INFO : Ignition 2.21.0 May 15 12:51:41.303655 ignition[1017]: INFO : Stage: files May 15 12:51:41.304909 ignition[1017]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 12:51:41.304909 ignition[1017]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 15 12:51:41.307327 ignition[1017]: DEBUG : files: compiled without relabeling support, skipping May 15 12:51:41.308425 ignition[1017]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 15 12:51:41.308425 ignition[1017]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 15 12:51:41.310664 ignition[1017]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 15 12:51:41.311518 ignition[1017]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 15 12:51:41.311518 ignition[1017]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 15 12:51:41.311106 unknown[1017]: wrote ssh authorized keys file for user: core May 15 12:51:41.314040 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 15 12:51:41.314040 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 15 12:51:41.517247 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 15 12:51:41.822571 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 15 12:51:41.822571 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 15 12:51:41.824995 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 15 12:51:41.824995 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 15 12:51:41.824995 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file 
"/sysroot/home/core/nginx.yaml" May 15 12:51:41.824995 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 15 12:51:41.824995 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 15 12:51:41.824995 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 15 12:51:41.824995 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 15 12:51:41.831127 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 15 12:51:41.831127 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 15 12:51:41.831127 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 15 12:51:41.831127 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 15 12:51:41.831127 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 15 12:51:41.831127 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 May 15 12:51:42.207184 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 15 12:51:42.402274 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 15 12:51:42.402274 ignition[1017]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 15 12:51:42.404607 ignition[1017]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 15 12:51:42.405653 ignition[1017]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 15 12:51:42.405653 ignition[1017]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 15 12:51:42.405653 ignition[1017]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" May 15 12:51:42.405653 ignition[1017]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" May 15 12:51:42.405653 ignition[1017]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" May 15 12:51:42.405653 ignition[1017]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" May 15 12:51:42.405653 ignition[1017]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" May 15 12:51:42.405653 ignition[1017]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" May 
15 12:51:42.405653 ignition[1017]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" May 15 12:51:42.419386 ignition[1017]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" May 15 12:51:42.419386 ignition[1017]: INFO : files: files passed May 15 12:51:42.419386 ignition[1017]: INFO : Ignition finished successfully May 15 12:51:42.408759 systemd[1]: Finished ignition-files.service - Ignition (files). May 15 12:51:42.411175 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 15 12:51:42.415373 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 15 12:51:42.426540 systemd[1]: ignition-quench.service: Deactivated successfully. May 15 12:51:42.426638 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 15 12:51:42.433392 initrd-setup-root-after-ignition[1048]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 15 12:51:42.433392 initrd-setup-root-after-ignition[1048]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 15 12:51:42.435778 initrd-setup-root-after-ignition[1052]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 15 12:51:42.436524 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 15 12:51:42.437970 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 15 12:51:42.439714 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 15 12:51:42.495443 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 15 12:51:42.495562 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 15 12:51:42.496892 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 15 12:51:42.498060 systemd[1]: Reached target initrd.target - Initrd Default Target. May 15 12:51:42.499548 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 15 12:51:42.500411 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 15 12:51:42.532730 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 15 12:51:42.535684 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 15 12:51:42.551375 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 15 12:51:42.552782 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 12:51:42.554217 systemd[1]: Stopped target timers.target - Timer Units. May 15 12:51:42.554837 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 15 12:51:42.554931 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 15 12:51:42.556164 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 15 12:51:42.556929 systemd[1]: Stopped target basic.target - Basic System. May 15 12:51:42.558231 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 15 12:51:42.559338 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 15 12:51:42.560465 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 15 12:51:42.561622 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. 
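Taken together, the op(…) entries above show what the user config asked the files stage to do: fetch a Helm tarball and a Kubernetes sysext image, drop a few YAML files and update.conf into place, link the image into /etc/extensions, and enable prepare-helm.service. Roughly the same operations, replayed as a plain Python sketch against the /sysroot prefix (only the paths and URLs come from the log; the preset filename is a guess):

    import pathlib
    import urllib.request

    SYSROOT = pathlib.Path("/sysroot")

    def write_from_url(url: str, dest: pathlib.Path) -> None:
        dest.parent.mkdir(parents=True, exist_ok=True)
        with urllib.request.urlopen(url, timeout=30) as resp:
            dest.write_bytes(resp.read())

    # op(3) and op(a): the two remote downloads
    write_from_url("https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz",
                   SYSROOT / "opt/helm-v3.13.2-linux-amd64.tar.gz")
    write_from_url("https://github.com/flatcar/sysext-bakery/releases/download/latest/"
                   "kubernetes-v1.31.0-x86-64.raw",
                   SYSROOT / "opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw")

    # op(9): symlink so systemd-sysext finds the image under /etc/extensions
    link = SYSROOT / "etc/extensions/kubernetes.raw"
    link.parent.mkdir(parents=True, exist_ok=True)
    link.symlink_to("/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw")

    # op(f): preset so prepare-helm.service comes up enabled (file name assumed)
    preset = SYSROOT / "etc/systemd/system-preset/20-ignition.preset"
    preset.parent.mkdir(parents=True, exist_ok=True)
    preset.write_text("enable prepare-helm.service\n")
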
May 15 12:51:42.563093 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 15 12:51:42.564263 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 15 12:51:42.565657 systemd[1]: Stopped target sysinit.target - System Initialization. May 15 12:51:42.566837 systemd[1]: Stopped target local-fs.target - Local File Systems. May 15 12:51:42.568083 systemd[1]: Stopped target swap.target - Swaps. May 15 12:51:42.569154 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 15 12:51:42.569270 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 15 12:51:42.570735 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 15 12:51:42.571533 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 15 12:51:42.572592 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 15 12:51:42.572674 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 15 12:51:42.573998 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 15 12:51:42.574121 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 15 12:51:42.575649 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 15 12:51:42.575755 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 15 12:51:42.576686 systemd[1]: ignition-files.service: Deactivated successfully. May 15 12:51:42.576810 systemd[1]: Stopped ignition-files.service - Ignition (files). May 15 12:51:42.580329 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 15 12:51:42.581878 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 15 12:51:42.583596 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 15 12:51:42.583739 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 15 12:51:42.584820 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 15 12:51:42.585112 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 15 12:51:42.589600 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 15 12:51:42.593905 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 15 12:51:42.607082 ignition[1072]: INFO : Ignition 2.21.0 May 15 12:51:42.607082 ignition[1072]: INFO : Stage: umount May 15 12:51:42.608312 ignition[1072]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 12:51:42.608312 ignition[1072]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 15 12:51:42.611743 ignition[1072]: INFO : umount: umount passed May 15 12:51:42.611743 ignition[1072]: INFO : Ignition finished successfully May 15 12:51:42.611239 systemd[1]: ignition-mount.service: Deactivated successfully. May 15 12:51:42.611391 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 15 12:51:42.613031 systemd[1]: ignition-disks.service: Deactivated successfully. May 15 12:51:42.613078 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 15 12:51:42.614385 systemd[1]: ignition-kargs.service: Deactivated successfully. May 15 12:51:42.614433 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 15 12:51:42.637685 systemd[1]: ignition-fetch.service: Deactivated successfully. May 15 12:51:42.637732 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). 
May 15 12:51:42.638697 systemd[1]: Stopped target network.target - Network. May 15 12:51:42.639645 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 15 12:51:42.639694 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 15 12:51:42.640719 systemd[1]: Stopped target paths.target - Path Units. May 15 12:51:42.641699 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 15 12:51:42.647337 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 15 12:51:42.648111 systemd[1]: Stopped target slices.target - Slice Units. May 15 12:51:42.649586 systemd[1]: Stopped target sockets.target - Socket Units. May 15 12:51:42.651156 systemd[1]: iscsid.socket: Deactivated successfully. May 15 12:51:42.651196 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 15 12:51:42.652544 systemd[1]: iscsiuio.socket: Deactivated successfully. May 15 12:51:42.652579 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 15 12:51:42.653912 systemd[1]: ignition-setup.service: Deactivated successfully. May 15 12:51:42.653962 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 15 12:51:42.655306 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 15 12:51:42.655348 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 15 12:51:42.656837 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 15 12:51:42.657939 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 15 12:51:42.660873 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 15 12:51:42.661646 systemd[1]: sysroot-boot.service: Deactivated successfully. May 15 12:51:42.661744 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 15 12:51:42.663499 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 15 12:51:42.663563 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 15 12:51:42.666845 systemd[1]: systemd-resolved.service: Deactivated successfully. May 15 12:51:42.666969 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 15 12:51:42.671793 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 15 12:51:42.672389 systemd[1]: systemd-networkd.service: Deactivated successfully. May 15 12:51:42.672511 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 15 12:51:42.674392 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 15 12:51:42.675523 systemd[1]: Stopped target network-pre.target - Preparation for Network. May 15 12:51:42.676348 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 15 12:51:42.676414 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 15 12:51:42.678211 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 15 12:51:42.679566 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 15 12:51:42.679614 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 15 12:51:42.682982 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 15 12:51:42.683029 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 15 12:51:42.684217 systemd[1]: systemd-modules-load.service: Deactivated successfully. 
May 15 12:51:42.684284 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 15 12:51:42.685274 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 15 12:51:42.685321 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 12:51:42.687043 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 12:51:42.691011 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 15 12:51:42.691071 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 15 12:51:42.705388 systemd[1]: network-cleanup.service: Deactivated successfully. May 15 12:51:42.705495 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 15 12:51:42.706831 systemd[1]: systemd-udevd.service: Deactivated successfully. May 15 12:51:42.707217 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 12:51:42.708465 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 15 12:51:42.708526 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 15 12:51:42.710154 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 15 12:51:42.710189 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 15 12:51:42.711570 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 15 12:51:42.711616 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 15 12:51:42.713536 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 15 12:51:42.713582 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 15 12:51:42.714774 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 15 12:51:42.714821 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 12:51:42.717367 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 15 12:51:42.718323 systemd[1]: systemd-network-generator.service: Deactivated successfully. May 15 12:51:42.718373 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. May 15 12:51:42.721427 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 15 12:51:42.721479 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 15 12:51:42.723483 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 15 12:51:42.723529 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 15 12:51:42.724727 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 15 12:51:42.724772 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 15 12:51:42.725535 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 15 12:51:42.725578 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 15 12:51:42.727306 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. May 15 12:51:42.727359 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. May 15 12:51:42.727400 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. 
May 15 12:51:42.727442 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 15 12:51:42.732855 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 15 12:51:42.732977 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 15 12:51:42.733881 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 15 12:51:42.735535 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 15 12:51:42.749495 systemd[1]: Switching root. May 15 12:51:42.785715 systemd-journald[205]: Journal stopped May 15 12:51:43.945494 systemd-journald[205]: Received SIGTERM from PID 1 (systemd). May 15 12:51:43.945520 kernel: SELinux: policy capability network_peer_controls=1 May 15 12:51:43.945532 kernel: SELinux: policy capability open_perms=1 May 15 12:51:43.945544 kernel: SELinux: policy capability extended_socket_class=1 May 15 12:51:43.945552 kernel: SELinux: policy capability always_check_network=0 May 15 12:51:43.945561 kernel: SELinux: policy capability cgroup_seclabel=1 May 15 12:51:43.945570 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 15 12:51:43.945579 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 15 12:51:43.945587 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 15 12:51:43.945597 kernel: SELinux: policy capability userspace_initial_context=0 May 15 12:51:43.945611 kernel: audit: type=1403 audit(1747313502.957:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 15 12:51:43.945621 systemd[1]: Successfully loaded SELinux policy in 80.983ms. May 15 12:51:43.945632 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.191ms. May 15 12:51:43.945642 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 15 12:51:43.945653 systemd[1]: Detected virtualization kvm. May 15 12:51:43.945664 systemd[1]: Detected architecture x86-64. May 15 12:51:43.945674 systemd[1]: Detected first boot. May 15 12:51:43.945684 systemd[1]: Initializing machine ID from random generator. May 15 12:51:43.945694 zram_generator::config[1116]: No configuration found. May 15 12:51:43.945704 kernel: Guest personality initialized and is inactive May 15 12:51:43.945713 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 15 12:51:43.945722 kernel: Initialized host personality May 15 12:51:43.945733 kernel: NET: Registered PF_VSOCK protocol family May 15 12:51:43.945742 systemd[1]: Populated /etc with preset unit settings. May 15 12:51:43.945753 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 15 12:51:43.945762 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 15 12:51:43.945772 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 15 12:51:43.945782 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 15 12:51:43.945791 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 15 12:51:43.945803 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 15 12:51:43.945813 systemd[1]: Created slice system-getty.slice - Slice /system/getty. 
May 15 12:51:43.945822 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 15 12:51:43.945832 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 15 12:51:43.945842 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 15 12:51:43.945852 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 15 12:51:43.945862 systemd[1]: Created slice user.slice - User and Session Slice. May 15 12:51:43.945873 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 15 12:51:43.945883 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 15 12:51:43.945893 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 15 12:51:43.945903 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 15 12:51:43.945916 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 15 12:51:43.945926 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 15 12:51:43.945936 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 15 12:51:43.945946 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 15 12:51:43.945958 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 15 12:51:43.945969 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 15 12:51:43.945979 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 15 12:51:43.945989 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 15 12:51:43.945999 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 15 12:51:43.946009 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 12:51:43.946019 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 15 12:51:43.946029 systemd[1]: Reached target slices.target - Slice Units. May 15 12:51:43.946040 systemd[1]: Reached target swap.target - Swaps. May 15 12:51:43.946050 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 15 12:51:43.946060 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 15 12:51:43.946070 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 15 12:51:43.946081 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 15 12:51:43.946094 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 15 12:51:43.946104 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 15 12:51:43.946114 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 15 12:51:43.946124 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 15 12:51:43.946134 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 15 12:51:43.946144 systemd[1]: Mounting media.mount - External Media Directory... May 15 12:51:43.946155 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 12:51:43.946165 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... 
May 15 12:51:43.946176 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 15 12:51:43.946205 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 15 12:51:43.946216 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 15 12:51:43.946227 systemd[1]: Reached target machines.target - Containers. May 15 12:51:43.946237 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 15 12:51:43.946247 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 12:51:43.946278 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 15 12:51:43.946288 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 15 12:51:43.946307 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 12:51:43.946317 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 15 12:51:43.946327 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 12:51:43.946338 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 15 12:51:43.946348 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 12:51:43.946383 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 15 12:51:43.946394 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 15 12:51:43.946404 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 15 12:51:43.946414 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 15 12:51:43.946432 systemd[1]: Stopped systemd-fsck-usr.service. May 15 12:51:43.946444 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 12:51:43.946454 systemd[1]: Starting systemd-journald.service - Journal Service... May 15 12:51:43.946464 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 15 12:51:43.946475 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 15 12:51:43.946484 kernel: loop: module loaded May 15 12:51:43.946494 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 15 12:51:43.946505 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 15 12:51:43.946517 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 15 12:51:43.946527 systemd[1]: verity-setup.service: Deactivated successfully. May 15 12:51:43.946537 systemd[1]: Stopped verity-setup.service. May 15 12:51:43.946547 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 12:51:43.946558 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 15 12:51:43.946568 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 15 12:51:43.946578 systemd[1]: Mounted media.mount - External Media Directory. 
May 15 12:51:43.946588 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 15 12:51:43.946600 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 15 12:51:43.946610 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 15 12:51:43.946620 kernel: fuse: init (API version 7.41) May 15 12:51:43.946630 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 15 12:51:43.946640 kernel: ACPI: bus type drm_connector registered May 15 12:51:43.946649 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 15 12:51:43.946660 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 15 12:51:43.946670 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 12:51:43.946680 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 12:51:43.946692 systemd[1]: modprobe@drm.service: Deactivated successfully. May 15 12:51:43.946703 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 15 12:51:43.946713 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 12:51:43.946723 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 12:51:43.946733 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 15 12:51:43.946743 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 15 12:51:43.946753 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 12:51:43.946762 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 12:51:43.946772 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 15 12:51:43.946804 systemd-journald[1200]: Collecting audit messages is disabled. May 15 12:51:43.946827 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 15 12:51:43.946842 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 15 12:51:43.946853 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 15 12:51:43.946863 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 15 12:51:43.946875 systemd-journald[1200]: Journal started May 15 12:51:43.946894 systemd-journald[1200]: Runtime Journal (/run/log/journal/15142cbab73a44ff83702f9dde25db96) is 8M, max 78.5M, 70.5M free. May 15 12:51:43.532823 systemd[1]: Queued start job for default target multi-user.target. May 15 12:51:43.553288 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. May 15 12:51:43.553815 systemd[1]: systemd-journald.service: Deactivated successfully. May 15 12:51:43.950795 systemd[1]: Started systemd-journald.service - Journal Service. May 15 12:51:43.965939 systemd[1]: Reached target network-pre.target - Preparation for Network. May 15 12:51:43.970528 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 15 12:51:43.973333 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 15 12:51:43.973959 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 15 12:51:43.974042 systemd[1]: Reached target local-fs.target - Local File Systems. May 15 12:51:43.977322 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. 
May 15 12:51:43.988353 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 15 12:51:43.989624 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 12:51:43.997931 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 15 12:51:44.000844 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 15 12:51:44.002334 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 12:51:44.006000 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 15 12:51:44.007075 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 15 12:51:44.010060 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 15 12:51:44.014399 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 15 12:51:44.021077 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 15 12:51:44.029932 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 15 12:51:44.031404 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 15 12:51:44.047324 systemd-journald[1200]: Time spent on flushing to /var/log/journal/15142cbab73a44ff83702f9dde25db96 is 57.303ms for 1000 entries. May 15 12:51:44.047324 systemd-journald[1200]: System Journal (/var/log/journal/15142cbab73a44ff83702f9dde25db96) is 8M, max 195.6M, 187.6M free. May 15 12:51:44.128095 systemd-journald[1200]: Received client request to flush runtime journal. May 15 12:51:44.128129 kernel: loop0: detected capacity change from 0 to 146240 May 15 12:51:44.128143 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 15 12:51:44.064978 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 15 12:51:44.065686 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 15 12:51:44.070814 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 15 12:51:44.085003 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 15 12:51:44.086233 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 15 12:51:44.100247 systemd-tmpfiles[1242]: ACLs are not supported, ignoring. May 15 12:51:44.101295 systemd-tmpfiles[1242]: ACLs are not supported, ignoring. May 15 12:51:44.118701 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 15 12:51:44.122754 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 15 12:51:44.125864 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 15 12:51:44.130310 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 15 12:51:44.152497 kernel: loop1: detected capacity change from 0 to 8 May 15 12:51:44.174299 kernel: loop2: detected capacity change from 0 to 113872 May 15 12:51:44.196993 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
May 15 12:51:44.199312 kernel: loop3: detected capacity change from 0 to 205544 May 15 12:51:44.202734 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 15 12:51:44.234745 systemd-tmpfiles[1264]: ACLs are not supported, ignoring. May 15 12:51:44.234763 systemd-tmpfiles[1264]: ACLs are not supported, ignoring. May 15 12:51:44.240858 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 15 12:51:44.255636 kernel: loop4: detected capacity change from 0 to 146240 May 15 12:51:44.278288 kernel: loop5: detected capacity change from 0 to 8 May 15 12:51:44.281286 kernel: loop6: detected capacity change from 0 to 113872 May 15 12:51:44.302321 kernel: loop7: detected capacity change from 0 to 205544 May 15 12:51:44.324193 (sd-merge)[1268]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'. May 15 12:51:44.326493 (sd-merge)[1268]: Merged extensions into '/usr'. May 15 12:51:44.331177 systemd[1]: Reload requested from client PID 1241 ('systemd-sysext') (unit systemd-sysext.service)... May 15 12:51:44.331247 systemd[1]: Reloading... May 15 12:51:44.463343 zram_generator::config[1301]: No configuration found. May 15 12:51:44.569808 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 12:51:44.610728 ldconfig[1236]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 15 12:51:44.647836 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 15 12:51:44.648562 systemd[1]: Reloading finished in 316 ms. May 15 12:51:44.663439 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 15 12:51:44.664380 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 15 12:51:44.675624 systemd[1]: Starting ensure-sysext.service... May 15 12:51:44.681399 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 15 12:51:44.709772 systemd[1]: Reload requested from client PID 1337 ('systemctl') (unit ensure-sysext.service)... May 15 12:51:44.709789 systemd[1]: Reloading... May 15 12:51:44.724900 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. May 15 12:51:44.725763 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. May 15 12:51:44.726234 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 15 12:51:44.726628 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 15 12:51:44.730221 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 15 12:51:44.730510 systemd-tmpfiles[1338]: ACLs are not supported, ignoring. May 15 12:51:44.730571 systemd-tmpfiles[1338]: ACLs are not supported, ignoring. May 15 12:51:44.738774 systemd-tmpfiles[1338]: Detected autofs mount point /boot during canonicalization of boot. May 15 12:51:44.739286 systemd-tmpfiles[1338]: Skipping /boot May 15 12:51:44.788311 systemd-tmpfiles[1338]: Detected autofs mount point /boot during canonicalization of boot. 
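The (sd-merge) lines above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, kubernetes and oem-akamai images onto /usr, which is what triggers the unit reload that follows. A small sketch of inspecting and re-merging those extensions on a running system, assuming the standard /etc/extensions and /var/lib/extensions search paths:

    import pathlib
    import subprocess

    # Show the candidate extension images systemd-sysext would consider.
    for ext_dir in ("/etc/extensions", "/var/lib/extensions"):
        p = pathlib.Path(ext_dir)
        if p.is_dir():
            for image in sorted(p.iterdir()):
                print("candidate extension:", image)

    # Re-merge the current set of images and print the resulting overlay status.
    subprocess.run(["systemd-sysext", "refresh"], check=True)
    subprocess.run(["systemd-sysext", "status"], check=True)
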
May 15 12:51:44.788325 systemd-tmpfiles[1338]: Skipping /boot May 15 12:51:44.796501 zram_generator::config[1365]: No configuration found. May 15 12:51:44.899899 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 12:51:44.974525 systemd[1]: Reloading finished in 264 ms. May 15 12:51:44.988415 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 15 12:51:45.003195 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 12:51:45.012389 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 15 12:51:45.016448 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 15 12:51:45.020633 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 15 12:51:45.027030 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 15 12:51:45.031449 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 12:51:45.033146 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 15 12:51:45.037637 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 12:51:45.037800 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 12:51:45.039413 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 12:51:45.045516 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 12:51:45.052110 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 12:51:45.052835 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 12:51:45.052984 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 12:51:45.053395 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 12:51:45.058463 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 15 12:51:45.061931 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 12:51:45.063484 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 12:51:45.063631 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 12:51:45.063703 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 12:51:45.063775 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
May 15 12:51:45.070493 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 12:51:45.071501 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 12:51:45.075976 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 15 12:51:45.076694 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 12:51:45.076797 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 12:51:45.077120 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 12:51:45.083636 systemd[1]: Finished ensure-sysext.service. May 15 12:51:45.090787 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 15 12:51:45.092159 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 15 12:51:45.106534 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 15 12:51:45.108824 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 15 12:51:45.110310 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 12:51:45.110511 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 12:51:45.134736 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 12:51:45.135247 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 12:51:45.137939 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 12:51:45.139464 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 12:51:45.140454 systemd[1]: modprobe@drm.service: Deactivated successfully. May 15 12:51:45.141187 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 15 12:51:45.144652 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 12:51:45.145802 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 15 12:51:45.150331 systemd-udevd[1414]: Using default interface naming scheme 'v255'. May 15 12:51:45.169676 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 15 12:51:45.173590 augenrules[1450]: No rules May 15 12:51:45.175308 systemd[1]: audit-rules.service: Deactivated successfully. May 15 12:51:45.175669 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 15 12:51:45.178799 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 15 12:51:45.192489 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 15 12:51:45.193648 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 15 12:51:45.206822 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
May 15 12:51:45.212285 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 15 12:51:45.376473 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 15 12:51:45.377134 systemd[1]: Reached target time-set.target - System Time Set. May 15 12:51:45.385324 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 15 12:51:45.400280 kernel: mousedev: PS/2 mouse device common for all mice May 15 12:51:45.418978 systemd-resolved[1413]: Positive Trust Anchors: May 15 12:51:45.418997 systemd-resolved[1413]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 15 12:51:45.419024 systemd-resolved[1413]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 15 12:51:45.430935 systemd-resolved[1413]: Defaulting to hostname 'linux'. May 15 12:51:45.434471 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 15 12:51:45.435493 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 15 12:51:45.437312 systemd[1]: Reached target sysinit.target - System Initialization. May 15 12:51:45.438114 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 15 12:51:45.439338 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 15 12:51:45.440302 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. May 15 12:51:45.441460 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 15 12:51:45.442359 systemd-networkd[1467]: lo: Link UP May 15 12:51:45.442370 systemd-networkd[1467]: lo: Gained carrier May 15 12:51:45.442402 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 15 12:51:45.443899 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 15 12:51:45.443907 systemd-networkd[1467]: Enumeration completed May 15 12:51:45.444284 systemd-networkd[1467]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 12:51:45.444288 systemd-networkd[1467]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 15 12:51:45.444501 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 15 12:51:45.444523 systemd[1]: Reached target paths.target - Path Units. May 15 12:51:45.444848 systemd-networkd[1467]: eth0: Link UP May 15 12:51:45.444994 systemd-networkd[1467]: eth0: Gained carrier May 15 12:51:45.445006 systemd-networkd[1467]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 12:51:45.471397 systemd[1]: Reached target timers.target - Timer Units. May 15 12:51:45.473545 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. 
May 15 12:51:45.477992 systemd[1]: Starting docker.socket - Docker Socket for the API... May 15 12:51:45.482508 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 15 12:51:45.484978 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 15 12:51:45.485879 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 15 12:51:45.493384 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 15 12:51:45.495691 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 15 12:51:45.496798 systemd[1]: Started systemd-networkd.service - Network Configuration. May 15 12:51:45.497539 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 15 12:51:45.499777 systemd[1]: Reached target network.target - Network. May 15 12:51:45.500301 systemd[1]: Reached target sockets.target - Socket Units. May 15 12:51:45.502305 systemd[1]: Reached target basic.target - Basic System. May 15 12:51:45.502852 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 15 12:51:45.502887 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 15 12:51:45.504450 systemd[1]: Starting containerd.service - containerd container runtime... May 15 12:51:45.507522 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 15 12:51:45.512276 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 May 15 12:51:45.513399 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 15 12:51:45.522683 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 15 12:51:45.526816 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 15 12:51:45.528700 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 15 12:51:45.529739 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 15 12:51:45.532119 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... May 15 12:51:45.536996 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 15 12:51:45.543980 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 15 12:51:45.560455 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 15 12:51:45.575304 kernel: ACPI: button: Power Button [PWRF] May 15 12:51:45.580125 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 15 12:51:45.590801 systemd[1]: Starting systemd-logind.service - User Login Management... May 15 12:51:45.594402 jq[1509]: false May 15 12:51:45.598769 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 15 12:51:45.611667 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 15 12:51:45.612956 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 15 12:51:45.614447 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
May 15 12:51:45.619659 systemd[1]: Starting update-engine.service - Update Engine... May 15 12:51:45.624146 google_oslogin_nss_cache[1511]: oslogin_cache_refresh[1511]: Refreshing passwd entry cache May 15 12:51:45.627174 oslogin_cache_refresh[1511]: Refreshing passwd entry cache May 15 12:51:45.632436 extend-filesystems[1510]: Found loop4 May 15 12:51:45.632436 extend-filesystems[1510]: Found loop5 May 15 12:51:45.632436 extend-filesystems[1510]: Found loop6 May 15 12:51:45.632436 extend-filesystems[1510]: Found loop7 May 15 12:51:45.632436 extend-filesystems[1510]: Found sda May 15 12:51:45.632436 extend-filesystems[1510]: Found sda1 May 15 12:51:45.632436 extend-filesystems[1510]: Found sda2 May 15 12:51:45.632436 extend-filesystems[1510]: Found sda3 May 15 12:51:45.632436 extend-filesystems[1510]: Found usr May 15 12:51:45.632436 extend-filesystems[1510]: Found sda4 May 15 12:51:45.632436 extend-filesystems[1510]: Found sda6 May 15 12:51:45.632436 extend-filesystems[1510]: Found sda7 May 15 12:51:45.632436 extend-filesystems[1510]: Found sda9 May 15 12:51:45.627412 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 15 12:51:45.643438 oslogin_cache_refresh[1511]: Failure getting users, quitting May 15 12:51:45.722983 google_oslogin_nss_cache[1511]: oslogin_cache_refresh[1511]: Failure getting users, quitting May 15 12:51:45.722983 google_oslogin_nss_cache[1511]: oslogin_cache_refresh[1511]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. May 15 12:51:45.722983 google_oslogin_nss_cache[1511]: oslogin_cache_refresh[1511]: Refreshing group entry cache May 15 12:51:45.722983 google_oslogin_nss_cache[1511]: oslogin_cache_refresh[1511]: Failure getting groups, quitting May 15 12:51:45.722983 google_oslogin_nss_cache[1511]: oslogin_cache_refresh[1511]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 15 12:51:45.652457 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 15 12:51:45.643453 oslogin_cache_refresh[1511]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. May 15 12:51:45.653809 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 15 12:51:45.643497 oslogin_cache_refresh[1511]: Refreshing group entry cache May 15 12:51:45.654111 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 15 12:51:45.644032 oslogin_cache_refresh[1511]: Failure getting groups, quitting May 15 12:51:45.656613 systemd[1]: extend-filesystems.service: Deactivated successfully. May 15 12:51:45.644041 oslogin_cache_refresh[1511]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 15 12:51:45.656907 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 15 12:51:45.658807 systemd[1]: google-oslogin-cache.service: Deactivated successfully. May 15 12:51:45.733389 jq[1527]: true May 15 12:51:45.659888 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. May 15 12:51:45.662246 systemd[1]: motdgen.service: Deactivated successfully. May 15 12:51:45.663969 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 15 12:51:45.667949 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 15 12:51:45.669519 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
May 15 12:51:45.715104 (ntainerd)[1536]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 15 12:51:45.767936 update_engine[1526]: I20250515 12:51:45.762213 1526 main.cc:92] Flatcar Update Engine starting May 15 12:51:45.786286 coreos-metadata[1505]: May 15 12:51:45.786 INFO Putting http://169.254.169.254/v1/token: Attempt #1 May 15 12:51:45.789564 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 15 12:51:45.801536 jq[1547]: true May 15 12:51:45.808444 dbus-daemon[1506]: [system] SELinux support is enabled May 15 12:51:45.808607 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 15 12:51:45.813288 tar[1535]: linux-amd64/helm May 15 12:51:45.814239 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 15 12:51:45.816309 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 15 12:51:45.816937 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 15 12:51:45.816955 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 15 12:51:45.831365 systemd[1]: Started update-engine.service - Update Engine. May 15 12:51:45.833762 update_engine[1526]: I20250515 12:51:45.833622 1526 update_check_scheduler.cc:74] Next update check in 5m42s May 15 12:51:45.837717 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 15 12:51:45.851278 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 15 12:51:45.855837 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 15 12:51:45.888437 systemd-logind[1520]: New seat seat0. May 15 12:51:45.890818 systemd[1]: Started systemd-logind.service - User Login Management. May 15 12:51:45.924712 bash[1574]: Updated "/home/core/.ssh/authorized_keys" May 15 12:51:45.925248 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 15 12:51:45.932517 systemd[1]: Starting sshkeys.service... May 15 12:51:45.964707 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 15 12:51:45.968563 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... May 15 12:51:46.028434 systemd-networkd[1467]: eth0: DHCPv4 address 172.232.9.197/24, gateway 172.232.9.1 acquired from 23.205.167.18 May 15 12:51:46.029910 systemd-timesyncd[1430]: Network configuration changed, trying to establish connection. May 15 12:51:46.031234 dbus-daemon[1506]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1467 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") May 15 12:51:46.038472 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... May 15 12:51:46.045483 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. May 15 12:51:46.050663 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
May 15 12:51:46.051674 containerd[1536]: time="2025-05-15T12:51:46Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 15 12:51:46.067798 containerd[1536]: time="2025-05-15T12:51:46.067766450Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 15 12:51:46.112196 containerd[1536]: time="2025-05-15T12:51:46.112155712Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.81µs" May 15 12:51:46.112196 containerd[1536]: time="2025-05-15T12:51:46.112187182Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 15 12:51:46.112343 containerd[1536]: time="2025-05-15T12:51:46.112205122Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 15 12:51:46.113404 containerd[1536]: time="2025-05-15T12:51:46.113374513Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 15 12:51:46.113404 containerd[1536]: time="2025-05-15T12:51:46.113400053Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 15 12:51:46.113457 containerd[1536]: time="2025-05-15T12:51:46.113423053Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 15 12:51:46.113509 containerd[1536]: time="2025-05-15T12:51:46.113483493Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 15 12:51:46.113509 containerd[1536]: time="2025-05-15T12:51:46.113501543Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 15 12:51:46.113747 containerd[1536]: time="2025-05-15T12:51:46.113719003Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 15 12:51:46.113747 containerd[1536]: time="2025-05-15T12:51:46.113740613Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 15 12:51:46.113792 containerd[1536]: time="2025-05-15T12:51:46.113751523Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 15 12:51:46.113792 containerd[1536]: time="2025-05-15T12:51:46.113759433Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 15 12:51:46.115318 containerd[1536]: time="2025-05-15T12:51:46.113844083Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 15 12:51:46.115318 containerd[1536]: time="2025-05-15T12:51:46.114064323Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 15 12:51:46.115318 containerd[1536]: time="2025-05-15T12:51:46.114094403Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 May 15 12:51:46.115318 containerd[1536]: time="2025-05-15T12:51:46.114103433Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 15 12:51:46.121500 containerd[1536]: time="2025-05-15T12:51:46.121465947Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 15 12:51:46.121788 containerd[1536]: time="2025-05-15T12:51:46.121761817Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 15 12:51:46.121864 containerd[1536]: time="2025-05-15T12:51:46.121839707Z" level=info msg="metadata content store policy set" policy=shared May 15 12:51:46.125623 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 15 12:51:46.128606 containerd[1536]: time="2025-05-15T12:51:46.128569080Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 15 12:51:46.128652 containerd[1536]: time="2025-05-15T12:51:46.128639350Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 15 12:51:46.128673 containerd[1536]: time="2025-05-15T12:51:46.128655000Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 15 12:51:46.128673 containerd[1536]: time="2025-05-15T12:51:46.128665860Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 15 12:51:46.128739 containerd[1536]: time="2025-05-15T12:51:46.128717100Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 15 12:51:46.128772 containerd[1536]: time="2025-05-15T12:51:46.128751270Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 15 12:51:46.128772 containerd[1536]: time="2025-05-15T12:51:46.128763830Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 15 12:51:46.128805 containerd[1536]: time="2025-05-15T12:51:46.128773280Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 15 12:51:46.128805 containerd[1536]: time="2025-05-15T12:51:46.128802870Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 15 12:51:46.128837 containerd[1536]: time="2025-05-15T12:51:46.128812710Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 15 12:51:46.128837 containerd[1536]: time="2025-05-15T12:51:46.128821090Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 15 12:51:46.128837 containerd[1536]: time="2025-05-15T12:51:46.128831560Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 15 12:51:46.130484 containerd[1536]: time="2025-05-15T12:51:46.130452261Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 15 12:51:46.130520 containerd[1536]: time="2025-05-15T12:51:46.130502981Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 15 12:51:46.130540 containerd[1536]: time="2025-05-15T12:51:46.130519601Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 15 
12:51:46.130540 containerd[1536]: time="2025-05-15T12:51:46.130529731Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 15 12:51:46.130540 containerd[1536]: time="2025-05-15T12:51:46.130539241Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 15 12:51:46.130586 containerd[1536]: time="2025-05-15T12:51:46.130548261Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 15 12:51:46.131293 containerd[1536]: time="2025-05-15T12:51:46.130559231Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 15 12:51:46.131293 containerd[1536]: time="2025-05-15T12:51:46.131291322Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 15 12:51:46.131343 containerd[1536]: time="2025-05-15T12:51:46.131304102Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 15 12:51:46.131343 containerd[1536]: time="2025-05-15T12:51:46.131313832Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 15 12:51:46.131343 containerd[1536]: time="2025-05-15T12:51:46.131323232Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 15 12:51:46.131402 containerd[1536]: time="2025-05-15T12:51:46.131391622Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 15 12:51:46.131421 containerd[1536]: time="2025-05-15T12:51:46.131405772Z" level=info msg="Start snapshots syncer" May 15 12:51:46.131541 containerd[1536]: time="2025-05-15T12:51:46.131515512Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 15 12:51:46.132735 containerd[1536]: time="2025-05-15T12:51:46.132690912Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 15 12:51:46.132839 containerd[1536]: time="2025-05-15T12:51:46.132747442Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 15 12:51:46.132839 containerd[1536]: time="2025-05-15T12:51:46.132820712Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 15 12:51:46.137920 containerd[1536]: time="2025-05-15T12:51:46.137884145Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 15 12:51:46.137920 containerd[1536]: time="2025-05-15T12:51:46.137918485Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 15 12:51:46.138244 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
May 15 12:51:46.142236 containerd[1536]: time="2025-05-15T12:51:46.142205877Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 15 12:51:46.142320 containerd[1536]: time="2025-05-15T12:51:46.142238217Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 15 12:51:46.142320 containerd[1536]: time="2025-05-15T12:51:46.142278907Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 15 12:51:46.142320 containerd[1536]: time="2025-05-15T12:51:46.142295207Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 15 12:51:46.142320 containerd[1536]: time="2025-05-15T12:51:46.142305137Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 15 12:51:46.142389 containerd[1536]: time="2025-05-15T12:51:46.142327487Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 15 12:51:46.142389 containerd[1536]: time="2025-05-15T12:51:46.142359367Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 15 12:51:46.142389 containerd[1536]: time="2025-05-15T12:51:46.142369967Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 15 12:51:46.143904 containerd[1536]: time="2025-05-15T12:51:46.143874468Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 15 12:51:46.143904 containerd[1536]: time="2025-05-15T12:51:46.143901368Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 15 12:51:46.143957 containerd[1536]: time="2025-05-15T12:51:46.143911078Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 15 12:51:46.143957 containerd[1536]: time="2025-05-15T12:51:46.143931358Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 15 12:51:46.143957 containerd[1536]: time="2025-05-15T12:51:46.143941238Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 15 12:51:46.144007 containerd[1536]: time="2025-05-15T12:51:46.143961858Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 15 12:51:46.144007 containerd[1536]: time="2025-05-15T12:51:46.143972848Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 15 12:51:46.144007 containerd[1536]: time="2025-05-15T12:51:46.143988858Z" level=info msg="runtime interface created" May 15 12:51:46.144007 containerd[1536]: time="2025-05-15T12:51:46.143994008Z" level=info msg="created NRI interface" May 15 12:51:46.144007 containerd[1536]: time="2025-05-15T12:51:46.144001368Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 15 12:51:46.144319 containerd[1536]: time="2025-05-15T12:51:46.144012138Z" level=info msg="Connect containerd service" May 15 12:51:46.144319 containerd[1536]: time="2025-05-15T12:51:46.144228138Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 15 12:51:46.144355 coreos-metadata[1581]: May 15 12:51:46.144 
INFO Putting http://169.254.169.254/v1/token: Attempt #1 May 15 12:51:46.151108 containerd[1536]: time="2025-05-15T12:51:46.150857571Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 15 12:51:46.233315 kernel: EDAC MC: Ver: 3.0.0 May 15 12:51:46.248197 coreos-metadata[1581]: May 15 12:51:46.248 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 May 15 12:51:46.327806 systemd[1]: Started systemd-hostnamed.service - Hostname Service. May 15 12:51:46.333358 dbus-daemon[1506]: [system] Successfully activated service 'org.freedesktop.hostname1' May 15 12:51:46.333796 dbus-daemon[1506]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1592 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") May 15 12:51:46.339150 systemd[1]: Starting polkit.service - Authorization Manager... May 15 12:51:46.349378 systemd-logind[1520]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 15 12:51:46.393416 locksmithd[1554]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 15 12:51:46.399741 sshd_keygen[1534]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 15 12:51:46.407158 coreos-metadata[1581]: May 15 12:51:46.406 INFO Fetch successful May 15 12:51:46.422081 systemd-logind[1520]: Watching system buttons on /dev/input/event2 (Power Button) May 15 12:51:46.437405 containerd[1536]: time="2025-05-15T12:51:46.436721144Z" level=info msg="Start subscribing containerd event" May 15 12:51:46.437405 containerd[1536]: time="2025-05-15T12:51:46.436762084Z" level=info msg="Start recovering state" May 15 12:51:46.437405 containerd[1536]: time="2025-05-15T12:51:46.436839854Z" level=info msg="Start event monitor" May 15 12:51:46.437405 containerd[1536]: time="2025-05-15T12:51:46.436852034Z" level=info msg="Start cni network conf syncer for default" May 15 12:51:46.437405 containerd[1536]: time="2025-05-15T12:51:46.436858174Z" level=info msg="Start streaming server" May 15 12:51:46.437405 containerd[1536]: time="2025-05-15T12:51:46.436871394Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 15 12:51:46.437405 containerd[1536]: time="2025-05-15T12:51:46.436878004Z" level=info msg="runtime interface starting up..." May 15 12:51:46.437405 containerd[1536]: time="2025-05-15T12:51:46.436883094Z" level=info msg="starting plugins..." May 15 12:51:46.437405 containerd[1536]: time="2025-05-15T12:51:46.437079934Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 15 12:51:46.441350 containerd[1536]: time="2025-05-15T12:51:46.440490786Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 15 12:51:46.441350 containerd[1536]: time="2025-05-15T12:51:46.441313476Z" level=info msg=serving... address=/run/containerd/containerd.sock May 15 12:51:46.446097 systemd[1]: Started containerd.service - containerd container runtime. May 15 12:51:46.446222 containerd[1536]: time="2025-05-15T12:51:46.446206519Z" level=info msg="containerd successfully booted in 0.395935s" May 15 12:51:46.496523 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
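Editorial note: the "failed to load cni during init" error above is expected at this stage, since /etc/cni/net.d is empty until a CNI plugin or network add-on installs a config. Purely as an illustration of the kind of file the CRI plugin is looking for, a hedged Python sketch that writes a minimal bridge conflist; the network name, subnet and file name are placeholders, not values from this system:

import json
import pathlib

# Illustrative only: a minimal CNI network config of the shape containerd expects
# under /etc/cni/net.d. All names and addresses below are made up for the example.
conflist = {
    "cniVersion": "1.0.0",
    "name": "example-net",
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "ranges": [[{"subnet": "10.88.0.0/16"}]],
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

path = pathlib.Path("/etc/cni/net.d/10-example.conflist")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(json.dumps(conflist, indent=2))
print("wrote", path)
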
May 15 12:51:46.498644 update-ssh-keys[1629]: Updated "/home/core/.ssh/authorized_keys" May 15 12:51:46.514577 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 15 12:51:46.554354 systemd[1]: Starting issuegen.service - Generate /run/issue... May 15 12:51:46.594528 systemd[1]: Finished sshkeys.service. May 15 12:51:46.600269 polkitd[1617]: Started polkitd version 126 May 15 12:51:46.608933 polkitd[1617]: Loading rules from directory /etc/polkit-1/rules.d May 15 12:51:46.609189 polkitd[1617]: Loading rules from directory /run/polkit-1/rules.d May 15 12:51:46.609222 polkitd[1617]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) May 15 12:51:46.609460 polkitd[1617]: Loading rules from directory /usr/local/share/polkit-1/rules.d May 15 12:51:46.609481 polkitd[1617]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) May 15 12:51:46.609513 polkitd[1617]: Loading rules from directory /usr/share/polkit-1/rules.d May 15 12:51:46.611245 polkitd[1617]: Finished loading, compiling and executing 2 rules May 15 12:51:46.611922 systemd[1]: issuegen.service: Deactivated successfully. May 15 12:51:46.612809 dbus-daemon[1506]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' May 15 12:51:46.613120 systemd[1]: Finished issuegen.service - Generate /run/issue. May 15 12:51:46.613572 polkitd[1617]: Acquired the name org.freedesktop.PolicyKit1 on the system bus May 15 12:51:46.635520 systemd[1]: Started polkit.service - Authorization Manager. May 15 12:51:46.657520 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 15 12:51:46.662392 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 15 12:51:46.662903 systemd-hostnamed[1592]: Hostname set to <172-232-9-197> (transient) May 15 12:51:46.664771 systemd-resolved[1413]: System hostname changed to '172-232-9-197'. May 15 12:51:46.677654 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 15 12:51:46.681509 systemd[1]: Started getty@tty1.service - Getty on tty1. May 15 12:51:46.685504 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 15 12:51:46.686183 systemd[1]: Reached target getty.target - Login Prompts. May 15 12:51:46.783035 tar[1535]: linux-amd64/LICENSE May 15 12:51:46.783276 tar[1535]: linux-amd64/README.md May 15 12:51:46.802368 coreos-metadata[1505]: May 15 12:51:46.802 INFO Putting http://169.254.169.254/v1/token: Attempt #2 May 15 12:51:46.806112 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 15 12:51:46.892521 coreos-metadata[1505]: May 15 12:51:46.892 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 May 15 12:51:46.947419 systemd-networkd[1467]: eth0: Gained IPv6LL May 15 12:51:46.948040 systemd-timesyncd[1430]: Network configuration changed, trying to establish connection. May 15 12:51:46.950577 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 15 12:51:46.951845 systemd[1]: Reached target network-online.target - Network is Online. May 15 12:51:46.955198 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 12:51:46.958439 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 15 12:51:46.986464 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
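Editorial note: coreos-metadata talks to the link-local metadata service at 169.254.169.254, first PUTting /v1/token to obtain a short-lived token and then GETting /v1/instance, /v1/network and /v1/ssh-keys with it, as the attempts above show. A rough Python sketch of that two-step flow; the header names are an assumption based on the Linode-style metadata API and are not taken from this log:

import urllib.request

BASE = "http://169.254.169.254/v1"

# Step 1: PUT /v1/token to obtain a short-lived token.
# Header names below are assumed (Linode-style metadata API), not from the log.
req = urllib.request.Request(
    f"{BASE}/token", method="PUT",
    headers={"Metadata-Token-Expiry-Seconds": "3600"},
)
token = urllib.request.urlopen(req, timeout=5).read().decode().strip()

# Step 2: GET instance data, presenting the token.
req = urllib.request.Request(
    f"{BASE}/instance",
    headers={"Metadata-Token": token, "Accept": "application/json"},
)
print(urllib.request.urlopen(req, timeout=5).read().decode())
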
May 15 12:51:47.075720 coreos-metadata[1505]: May 15 12:51:47.075 INFO Fetch successful May 15 12:51:47.075845 coreos-metadata[1505]: May 15 12:51:47.075 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 May 15 12:51:47.394874 coreos-metadata[1505]: May 15 12:51:47.394 INFO Fetch successful May 15 12:51:47.488215 systemd-timesyncd[1430]: Network configuration changed, trying to establish connection. May 15 12:51:47.489494 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 15 12:51:47.490948 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 15 12:51:47.751999 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 12:51:47.752928 systemd[1]: Reached target multi-user.target - Multi-User System. May 15 12:51:47.790111 systemd[1]: Startup finished in 3.079s (kernel) + 7.264s (initrd) + 4.912s (userspace) = 15.256s. May 15 12:51:47.798580 (kubelet)[1694]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 12:51:48.311212 kubelet[1694]: E0515 12:51:48.311135 1694 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 12:51:48.314530 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 12:51:48.314709 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 12:51:48.315072 systemd[1]: kubelet.service: Consumed 800ms CPU time, 235.1M memory peak. May 15 12:51:48.932814 systemd-timesyncd[1430]: Network configuration changed, trying to establish connection. May 15 12:51:50.780088 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 15 12:51:50.781161 systemd[1]: Started sshd@0-172.232.9.197:22-139.178.89.65:40684.service - OpenSSH per-connection server daemon (139.178.89.65:40684). May 15 12:51:51.129296 sshd[1706]: Accepted publickey for core from 139.178.89.65 port 40684 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE May 15 12:51:51.130998 sshd-session[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:51:51.138014 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 15 12:51:51.139397 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 15 12:51:51.147651 systemd-logind[1520]: New session 1 of user core. May 15 12:51:51.161547 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 15 12:51:51.164721 systemd[1]: Starting user@500.service - User Manager for UID 500... May 15 12:51:51.175818 (systemd)[1710]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 15 12:51:51.178345 systemd-logind[1520]: New session c1 of user core. May 15 12:51:51.313600 systemd[1710]: Queued start job for default target default.target. May 15 12:51:51.324535 systemd[1710]: Created slice app.slice - User Application Slice. May 15 12:51:51.324563 systemd[1710]: Reached target paths.target - Paths. May 15 12:51:51.324692 systemd[1710]: Reached target timers.target - Timers. May 15 12:51:51.326159 systemd[1710]: Starting dbus.socket - D-Bus User Message Bus Socket... 
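Editorial note: the kubelet exits immediately above because /var/lib/kubelet/config.yaml does not exist yet; that file is normally generated later by kubeadm init/join or equivalent provisioning, so early failures and scheduled restarts of kubelet.service are expected during this phase of boot. A minimal Python sketch of the check, with a placeholder example of the KubeletConfiguration header the kubelet expects to find in that file:

import pathlib
import sys

cfg = pathlib.Path("/var/lib/kubelet/config.yaml")

if not cfg.exists():
    # Mirrors the failure above: the kubelet refuses to start without its config
    # file. The stanza printed below is only a placeholder example of how a
    # KubeletConfiguration begins, not the file this node will eventually use.
    print(f"{cfg} missing; a KubeletConfiguration starts like:", file=sys.stderr)
    print("apiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration",
          file=sys.stderr)
else:
    print(cfg.read_text()[:200])
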
May 15 12:51:51.336914 systemd[1710]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 15 12:51:51.337042 systemd[1710]: Reached target sockets.target - Sockets. May 15 12:51:51.337087 systemd[1710]: Reached target basic.target - Basic System. May 15 12:51:51.337340 systemd[1710]: Reached target default.target - Main User Target. May 15 12:51:51.337370 systemd[1710]: Startup finished in 152ms. May 15 12:51:51.337764 systemd[1]: Started user@500.service - User Manager for UID 500. May 15 12:51:51.351401 systemd[1]: Started session-1.scope - Session 1 of User core. May 15 12:51:51.612374 systemd[1]: Started sshd@1-172.232.9.197:22-139.178.89.65:40686.service - OpenSSH per-connection server daemon (139.178.89.65:40686). May 15 12:51:51.945828 sshd[1721]: Accepted publickey for core from 139.178.89.65 port 40686 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE May 15 12:51:51.947695 sshd-session[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:51:51.952783 systemd-logind[1520]: New session 2 of user core. May 15 12:51:51.959365 systemd[1]: Started session-2.scope - Session 2 of User core. May 15 12:51:52.190510 sshd[1723]: Connection closed by 139.178.89.65 port 40686 May 15 12:51:52.191291 sshd-session[1721]: pam_unix(sshd:session): session closed for user core May 15 12:51:52.196115 systemd[1]: sshd@1-172.232.9.197:22-139.178.89.65:40686.service: Deactivated successfully. May 15 12:51:52.198297 systemd[1]: session-2.scope: Deactivated successfully. May 15 12:51:52.199512 systemd-logind[1520]: Session 2 logged out. Waiting for processes to exit. May 15 12:51:52.201218 systemd-logind[1520]: Removed session 2. May 15 12:51:52.253591 systemd[1]: Started sshd@2-172.232.9.197:22-139.178.89.65:40688.service - OpenSSH per-connection server daemon (139.178.89.65:40688). May 15 12:51:52.601580 sshd[1729]: Accepted publickey for core from 139.178.89.65 port 40688 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE May 15 12:51:52.602643 sshd-session[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:51:52.607841 systemd-logind[1520]: New session 3 of user core. May 15 12:51:52.614372 systemd[1]: Started session-3.scope - Session 3 of User core. May 15 12:51:52.848271 sshd[1731]: Connection closed by 139.178.89.65 port 40688 May 15 12:51:52.848816 sshd-session[1729]: pam_unix(sshd:session): session closed for user core May 15 12:51:52.852904 systemd[1]: sshd@2-172.232.9.197:22-139.178.89.65:40688.service: Deactivated successfully. May 15 12:51:52.855090 systemd[1]: session-3.scope: Deactivated successfully. May 15 12:51:52.856440 systemd-logind[1520]: Session 3 logged out. Waiting for processes to exit. May 15 12:51:52.857802 systemd-logind[1520]: Removed session 3. May 15 12:51:52.913232 systemd[1]: Started sshd@3-172.232.9.197:22-139.178.89.65:40704.service - OpenSSH per-connection server daemon (139.178.89.65:40704). May 15 12:51:53.257028 sshd[1737]: Accepted publickey for core from 139.178.89.65 port 40704 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE May 15 12:51:53.258622 sshd-session[1737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:51:53.265272 systemd-logind[1520]: New session 4 of user core. May 15 12:51:53.273371 systemd[1]: Started session-4.scope - Session 4 of User core. 
May 15 12:51:53.506548 sshd[1739]: Connection closed by 139.178.89.65 port 40704 May 15 12:51:53.507054 sshd-session[1737]: pam_unix(sshd:session): session closed for user core May 15 12:51:53.511092 systemd[1]: sshd@3-172.232.9.197:22-139.178.89.65:40704.service: Deactivated successfully. May 15 12:51:53.512967 systemd[1]: session-4.scope: Deactivated successfully. May 15 12:51:53.513812 systemd-logind[1520]: Session 4 logged out. Waiting for processes to exit. May 15 12:51:53.514921 systemd-logind[1520]: Removed session 4. May 15 12:51:53.571364 systemd[1]: Started sshd@4-172.232.9.197:22-139.178.89.65:40706.service - OpenSSH per-connection server daemon (139.178.89.65:40706). May 15 12:51:53.916798 sshd[1745]: Accepted publickey for core from 139.178.89.65 port 40706 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE May 15 12:51:53.918760 sshd-session[1745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:51:53.924911 systemd-logind[1520]: New session 5 of user core. May 15 12:51:53.931377 systemd[1]: Started session-5.scope - Session 5 of User core. May 15 12:51:54.125732 sudo[1748]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 15 12:51:54.126220 sudo[1748]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 12:51:54.138073 sudo[1748]: pam_unix(sudo:session): session closed for user root May 15 12:51:54.190346 sshd[1747]: Connection closed by 139.178.89.65 port 40706 May 15 12:51:54.191347 sshd-session[1745]: pam_unix(sshd:session): session closed for user core May 15 12:51:54.195540 systemd-logind[1520]: Session 5 logged out. Waiting for processes to exit. May 15 12:51:54.196076 systemd[1]: sshd@4-172.232.9.197:22-139.178.89.65:40706.service: Deactivated successfully. May 15 12:51:54.197951 systemd[1]: session-5.scope: Deactivated successfully. May 15 12:51:54.199923 systemd-logind[1520]: Removed session 5. May 15 12:51:54.264939 systemd[1]: Started sshd@5-172.232.9.197:22-139.178.89.65:40708.service - OpenSSH per-connection server daemon (139.178.89.65:40708). May 15 12:51:54.609773 sshd[1754]: Accepted publickey for core from 139.178.89.65 port 40708 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE May 15 12:51:54.611113 sshd-session[1754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:51:54.615800 systemd-logind[1520]: New session 6 of user core. May 15 12:51:54.621367 systemd[1]: Started session-6.scope - Session 6 of User core. May 15 12:51:54.811047 sudo[1758]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 15 12:51:54.811425 sudo[1758]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 12:51:54.817327 sudo[1758]: pam_unix(sudo:session): session closed for user root May 15 12:51:54.823692 sudo[1757]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 15 12:51:54.824138 sudo[1757]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 12:51:54.834481 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 15 12:51:54.881546 augenrules[1780]: No rules May 15 12:51:54.883628 systemd[1]: audit-rules.service: Deactivated successfully. May 15 12:51:54.883885 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
May 15 12:51:54.885350 sudo[1757]: pam_unix(sudo:session): session closed for user root May 15 12:51:54.937408 sshd[1756]: Connection closed by 139.178.89.65 port 40708 May 15 12:51:54.937892 sshd-session[1754]: pam_unix(sshd:session): session closed for user core May 15 12:51:54.943475 systemd-logind[1520]: Session 6 logged out. Waiting for processes to exit. May 15 12:51:54.943639 systemd[1]: sshd@5-172.232.9.197:22-139.178.89.65:40708.service: Deactivated successfully. May 15 12:51:54.946014 systemd[1]: session-6.scope: Deactivated successfully. May 15 12:51:54.947776 systemd-logind[1520]: Removed session 6. May 15 12:51:54.998373 systemd[1]: Started sshd@6-172.232.9.197:22-139.178.89.65:40714.service - OpenSSH per-connection server daemon (139.178.89.65:40714). May 15 12:51:55.343780 sshd[1789]: Accepted publickey for core from 139.178.89.65 port 40714 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE May 15 12:51:55.345120 sshd-session[1789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:51:55.349121 systemd-logind[1520]: New session 7 of user core. May 15 12:51:55.356487 systemd[1]: Started session-7.scope - Session 7 of User core. May 15 12:51:55.545962 sudo[1792]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 15 12:51:55.546275 sudo[1792]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 12:51:55.823467 systemd[1]: Starting docker.service - Docker Application Container Engine... May 15 12:51:55.839545 (dockerd)[1809]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 15 12:51:56.037748 dockerd[1809]: time="2025-05-15T12:51:56.037410321Z" level=info msg="Starting up" May 15 12:51:56.038818 dockerd[1809]: time="2025-05-15T12:51:56.038795922Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 15 12:51:56.094126 dockerd[1809]: time="2025-05-15T12:51:56.093730869Z" level=info msg="Loading containers: start." May 15 12:51:56.104285 kernel: Initializing XFRM netlink socket May 15 12:51:56.306659 systemd-timesyncd[1430]: Network configuration changed, trying to establish connection. May 15 12:51:56.353158 systemd-networkd[1467]: docker0: Link UP May 15 12:51:56.355847 dockerd[1809]: time="2025-05-15T12:51:56.355800650Z" level=info msg="Loading containers: done." May 15 12:51:56.367968 dockerd[1809]: time="2025-05-15T12:51:56.367928936Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 15 12:51:56.368304 dockerd[1809]: time="2025-05-15T12:51:56.368013516Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 15 12:51:56.371493 dockerd[1809]: time="2025-05-15T12:51:56.368379357Z" level=info msg="Initializing buildkit" May 15 12:51:56.369779 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1664712788-merged.mount: Deactivated successfully. 
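Editorial note: dockerd warns above that it will not use the native overlayfs diff driver because the kernel was built with CONFIG_OVERLAY_FS_REDIRECT_DIR enabled; this only degrades image-build performance and is otherwise harmless. A quick way to confirm how the running kernel was configured, sketched in Python under the assumption that either /proc/config.gz (IKCONFIG) or a /boot config file is available:

import gzip
import pathlib
import platform

OPTION = "CONFIG_OVERLAY_FS_REDIRECT_DIR"

# Candidate locations for the kernel config; availability is an assumption.
candidates = [pathlib.Path("/proc/config.gz"),
              pathlib.Path(f"/boot/config-{platform.release()}")]

for p in candidates:
    if not p.exists():
        continue
    raw = p.read_bytes()
    text = gzip.decompress(raw).decode() if p.name.endswith(".gz") else raw.decode()
    hits = [line for line in text.splitlines() if OPTION in line]
    print(p, "->", hits or ["option not mentioned"])
    break
else:
    print("no kernel config found in", [str(c) for c in candidates])
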
May 15 12:51:56.388758 dockerd[1809]: time="2025-05-15T12:51:56.388731277Z" level=info msg="Completed buildkit initialization" May 15 12:51:56.395034 dockerd[1809]: time="2025-05-15T12:51:56.395006080Z" level=info msg="Daemon has completed initialization" May 15 12:51:56.395156 dockerd[1809]: time="2025-05-15T12:51:56.395123350Z" level=info msg="API listen on /run/docker.sock" May 15 12:51:56.395214 systemd[1]: Started docker.service - Docker Application Container Engine. May 15 12:51:57.217739 systemd-resolved[1413]: Clock change detected. Flushing caches. May 15 12:51:57.218344 systemd-timesyncd[1430]: Contacted time server [2607:ff50:0:20::5ca1:ab1e]:123 (2.flatcar.pool.ntp.org). May 15 12:51:57.218603 systemd-timesyncd[1430]: Initial clock synchronization to Thu 2025-05-15 12:51:57.217497 UTC. May 15 12:51:58.161091 containerd[1536]: time="2025-05-15T12:51:58.161039013Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" May 15 12:51:58.872377 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4219424281.mount: Deactivated successfully. May 15 12:51:59.073005 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 15 12:51:59.075439 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 12:51:59.257990 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 12:51:59.264728 (kubelet)[2069]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 12:51:59.309593 kubelet[2069]: E0515 12:51:59.309513 2069 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 12:51:59.314362 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 12:51:59.314579 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 12:51:59.314917 systemd[1]: kubelet.service: Consumed 180ms CPU time, 96.3M memory peak. 
May 15 12:51:59.972963 containerd[1536]: time="2025-05-15T12:51:59.972906249Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:51:59.974130 containerd[1536]: time="2025-05-15T12:51:59.974041839Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=27960987" May 15 12:51:59.974905 containerd[1536]: time="2025-05-15T12:51:59.974874870Z" level=info msg="ImageCreate event name:\"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:51:59.978483 containerd[1536]: time="2025-05-15T12:51:59.978114601Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:51:59.980672 containerd[1536]: time="2025-05-15T12:51:59.980649123Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"27957787\" in 1.81956688s" May 15 12:51:59.980761 containerd[1536]: time="2025-05-15T12:51:59.980745083Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\"" May 15 12:51:59.982899 containerd[1536]: time="2025-05-15T12:51:59.982867644Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" May 15 12:52:01.277001 containerd[1536]: time="2025-05-15T12:52:01.276537790Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:52:01.277377 containerd[1536]: time="2025-05-15T12:52:01.277352270Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=24713776" May 15 12:52:01.277884 containerd[1536]: time="2025-05-15T12:52:01.277864911Z" level=info msg="ImageCreate event name:\"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:52:01.279945 containerd[1536]: time="2025-05-15T12:52:01.279924382Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:52:01.280910 containerd[1536]: time="2025-05-15T12:52:01.280882322Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"26202149\" in 1.297983388s" May 15 12:52:01.280954 containerd[1536]: time="2025-05-15T12:52:01.280912262Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\"" May 15 12:52:01.281634 
containerd[1536]: time="2025-05-15T12:52:01.281608773Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" May 15 12:52:02.450818 containerd[1536]: time="2025-05-15T12:52:02.450776177Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:52:02.451761 containerd[1536]: time="2025-05-15T12:52:02.451736147Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=18780386" May 15 12:52:02.452736 containerd[1536]: time="2025-05-15T12:52:02.452318328Z" level=info msg="ImageCreate event name:\"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:52:02.454271 containerd[1536]: time="2025-05-15T12:52:02.454249599Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:52:02.455161 containerd[1536]: time="2025-05-15T12:52:02.455133749Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"20268777\" in 1.173497986s" May 15 12:52:02.455203 containerd[1536]: time="2025-05-15T12:52:02.455162759Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\"" May 15 12:52:02.455821 containerd[1536]: time="2025-05-15T12:52:02.455799409Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" May 15 12:52:03.590908 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1410081517.mount: Deactivated successfully. 
May 15 12:52:03.926733 containerd[1536]: time="2025-05-15T12:52:03.926568344Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:52:03.927428 containerd[1536]: time="2025-05-15T12:52:03.927143774Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=30354625" May 15 12:52:03.929028 containerd[1536]: time="2025-05-15T12:52:03.928011105Z" level=info msg="ImageCreate event name:\"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:52:03.929446 containerd[1536]: time="2025-05-15T12:52:03.929415936Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:52:03.930020 containerd[1536]: time="2025-05-15T12:52:03.929989306Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"30353644\" in 1.474166217s" May 15 12:52:03.930088 containerd[1536]: time="2025-05-15T12:52:03.930074206Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\"" May 15 12:52:03.930663 containerd[1536]: time="2025-05-15T12:52:03.930636986Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 15 12:52:04.532488 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2203644876.mount: Deactivated successfully. 
May 15 12:52:05.174630 containerd[1536]: time="2025-05-15T12:52:05.174577988Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:52:05.175808 containerd[1536]: time="2025-05-15T12:52:05.175577908Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" May 15 12:52:05.176317 containerd[1536]: time="2025-05-15T12:52:05.176285059Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:52:05.179038 containerd[1536]: time="2025-05-15T12:52:05.179006870Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:52:05.179983 containerd[1536]: time="2025-05-15T12:52:05.179947850Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.249286084s" May 15 12:52:05.180029 containerd[1536]: time="2025-05-15T12:52:05.179983010Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 15 12:52:05.181014 containerd[1536]: time="2025-05-15T12:52:05.180963011Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 15 12:52:05.743209 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount997514846.mount: Deactivated successfully. 
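Editorial note: each successful pull logs the image's size in bytes together with the wall-clock duration, so approximate pull throughput can be read straight off these lines. Using the coredns pull above as a worked example:

# Numbers copied from the coredns pull line above:
size_bytes = 18_182_961          # size "18182961"
duration_s = 1.249286084         # "in 1.249286084s"

rate = size_bytes / duration_s
print(f"{rate / 1e6:.1f} MB/s ({rate / 2**20:.1f} MiB/s)")
# -> roughly 14.6 MB/s (13.9 MiB/s)
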
May 15 12:52:05.748712 containerd[1536]: time="2025-05-15T12:52:05.748664215Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 12:52:05.749678 containerd[1536]: time="2025-05-15T12:52:05.749437515Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 15 12:52:05.750207 containerd[1536]: time="2025-05-15T12:52:05.750151665Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 12:52:05.752357 containerd[1536]: time="2025-05-15T12:52:05.752313946Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 12:52:05.753342 containerd[1536]: time="2025-05-15T12:52:05.752785277Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 571.689646ms" May 15 12:52:05.753342 containerd[1536]: time="2025-05-15T12:52:05.752815607Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 15 12:52:05.753875 containerd[1536]: time="2025-05-15T12:52:05.753851027Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 15 12:52:06.362903 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2773454022.mount: Deactivated successfully. 
May 15 12:52:09.186333 containerd[1536]: time="2025-05-15T12:52:09.186262962Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:52:09.187263 containerd[1536]: time="2025-05-15T12:52:09.187177873Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" May 15 12:52:09.188487 containerd[1536]: time="2025-05-15T12:52:09.187947983Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:52:09.190037 containerd[1536]: time="2025-05-15T12:52:09.190001174Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:52:09.191919 containerd[1536]: time="2025-05-15T12:52:09.191044005Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.437169888s" May 15 12:52:09.191919 containerd[1536]: time="2025-05-15T12:52:09.191082965Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" May 15 12:52:09.323033 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 15 12:52:09.324833 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 12:52:09.489017 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 12:52:09.498733 (kubelet)[2223]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 12:52:09.537354 kubelet[2223]: E0515 12:52:09.537323 2223 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 12:52:09.542156 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 12:52:09.542594 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 12:52:09.543655 systemd[1]: kubelet.service: Consumed 165ms CPU time, 93.5M memory peak. May 15 12:52:10.999256 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 12:52:10.999717 systemd[1]: kubelet.service: Consumed 165ms CPU time, 93.5M memory peak. May 15 12:52:11.001647 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 12:52:11.029848 systemd[1]: Reload requested from client PID 2240 ('systemctl') (unit session-7.scope)... May 15 12:52:11.029861 systemd[1]: Reloading... May 15 12:52:11.169575 zram_generator::config[2290]: No configuration found. May 15 12:52:11.255654 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 12:52:11.364431 systemd[1]: Reloading finished in 334 ms. 
May 15 12:52:11.417103 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 15 12:52:11.417198 systemd[1]: kubelet.service: Failed with result 'signal'. May 15 12:52:11.417693 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 12:52:11.417730 systemd[1]: kubelet.service: Consumed 122ms CPU time, 83.6M memory peak. May 15 12:52:11.419246 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 12:52:11.577780 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 12:52:11.583722 (kubelet)[2338]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 15 12:52:11.641525 kubelet[2338]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 12:52:11.641525 kubelet[2338]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 15 12:52:11.641525 kubelet[2338]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 12:52:11.641829 kubelet[2338]: I0515 12:52:11.641571 2338 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 12:52:11.845851 kubelet[2338]: I0515 12:52:11.845481 2338 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 15 12:52:11.845851 kubelet[2338]: I0515 12:52:11.845511 2338 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 12:52:11.845851 kubelet[2338]: I0515 12:52:11.845714 2338 server.go:929] "Client rotation is on, will bootstrap in background" May 15 12:52:11.875681 kubelet[2338]: I0515 12:52:11.875659 2338 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 12:52:11.876144 kubelet[2338]: E0515 12:52:11.876105 2338 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.232.9.197:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.232.9.197:6443: connect: connection refused" logger="UnhandledError" May 15 12:52:11.882760 kubelet[2338]: I0515 12:52:11.882734 2338 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 15 12:52:11.888614 kubelet[2338]: I0515 12:52:11.888582 2338 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 15 12:52:11.889776 kubelet[2338]: I0515 12:52:11.889750 2338 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 15 12:52:11.889915 kubelet[2338]: I0515 12:52:11.889891 2338 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 12:52:11.890076 kubelet[2338]: I0515 12:52:11.889913 2338 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-232-9-197","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 15 12:52:11.890185 kubelet[2338]: I0515 12:52:11.890089 2338 topology_manager.go:138] "Creating topology manager with none policy" May 15 12:52:11.890185 kubelet[2338]: I0515 12:52:11.890099 2338 container_manager_linux.go:300] "Creating device plugin manager" May 15 12:52:11.890260 kubelet[2338]: I0515 12:52:11.890190 2338 state_mem.go:36] "Initialized new in-memory state store" May 15 12:52:11.892497 kubelet[2338]: I0515 12:52:11.892153 2338 kubelet.go:408] "Attempting to sync node with API server" May 15 12:52:11.892497 kubelet[2338]: I0515 12:52:11.892174 2338 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 12:52:11.892497 kubelet[2338]: I0515 12:52:11.892215 2338 kubelet.go:314] "Adding apiserver pod source" May 15 12:52:11.892497 kubelet[2338]: I0515 12:52:11.892231 2338 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 12:52:11.898170 kubelet[2338]: W0515 12:52:11.898014 2338 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.232.9.197:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-232-9-197&limit=500&resourceVersion=0": dial tcp 172.232.9.197:6443: connect: connection refused May 15 12:52:11.898170 kubelet[2338]: E0515 12:52:11.898064 2338 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://172.232.9.197:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-232-9-197&limit=500&resourceVersion=0\": dial tcp 172.232.9.197:6443: connect: connection refused" logger="UnhandledError" May 15 12:52:11.898350 kubelet[2338]: W0515 12:52:11.898317 2338 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.232.9.197:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.232.9.197:6443: connect: connection refused May 15 12:52:11.898384 kubelet[2338]: E0515 12:52:11.898351 2338 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.232.9.197:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.232.9.197:6443: connect: connection refused" logger="UnhandledError" May 15 12:52:11.898692 kubelet[2338]: I0515 12:52:11.898673 2338 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 15 12:52:11.900417 kubelet[2338]: I0515 12:52:11.900389 2338 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 12:52:11.900488 kubelet[2338]: W0515 12:52:11.900473 2338 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 15 12:52:11.903830 kubelet[2338]: I0515 12:52:11.903684 2338 server.go:1269] "Started kubelet" May 15 12:52:11.904261 kubelet[2338]: I0515 12:52:11.904222 2338 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 15 12:52:11.907184 kubelet[2338]: I0515 12:52:11.906661 2338 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 12:52:11.907184 kubelet[2338]: I0515 12:52:11.906785 2338 server.go:460] "Adding debug handlers to kubelet server" May 15 12:52:11.907184 kubelet[2338]: I0515 12:52:11.906961 2338 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 12:52:11.908697 kubelet[2338]: I0515 12:52:11.908489 2338 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 12:52:11.914092 kubelet[2338]: E0515 12:52:11.911741 2338 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.232.9.197:6443/api/v1/namespaces/default/events\": dial tcp 172.232.9.197:6443: connect: connection refused" event="&Event{ObjectMeta:{172-232-9-197.183fb4664c59f04c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-232-9-197,UID:172-232-9-197,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-232-9-197,},FirstTimestamp:2025-05-15 12:52:11.9036683 +0000 UTC m=+0.316640639,LastTimestamp:2025-05-15 12:52:11.9036683 +0000 UTC m=+0.316640639,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-232-9-197,}" May 15 12:52:11.914872 kubelet[2338]: I0515 12:52:11.914788 2338 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 15 12:52:11.915711 kubelet[2338]: E0515 12:52:11.915439 2338 kubelet_node_status.go:453] "Error getting the current node from 
lister" err="node \"172-232-9-197\" not found" May 15 12:52:11.915994 kubelet[2338]: I0515 12:52:11.915982 2338 volume_manager.go:289] "Starting Kubelet Volume Manager" May 15 12:52:11.916183 kubelet[2338]: I0515 12:52:11.916168 2338 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 15 12:52:11.916286 kubelet[2338]: I0515 12:52:11.916271 2338 reconciler.go:26] "Reconciler: start to sync state" May 15 12:52:11.916703 kubelet[2338]: W0515 12:52:11.916650 2338 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.232.9.197:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.232.9.197:6443: connect: connection refused May 15 12:52:11.916820 kubelet[2338]: E0515 12:52:11.916804 2338 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.232.9.197:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.232.9.197:6443: connect: connection refused" logger="UnhandledError" May 15 12:52:11.916993 kubelet[2338]: I0515 12:52:11.916978 2338 factory.go:221] Registration of the systemd container factory successfully May 15 12:52:11.917111 kubelet[2338]: I0515 12:52:11.917095 2338 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 12:52:11.918548 kubelet[2338]: E0515 12:52:11.918534 2338 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 12:52:11.918842 kubelet[2338]: I0515 12:52:11.918829 2338 factory.go:221] Registration of the containerd container factory successfully May 15 12:52:11.924183 kubelet[2338]: E0515 12:52:11.924161 2338 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.232.9.197:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-232-9-197?timeout=10s\": dial tcp 172.232.9.197:6443: connect: connection refused" interval="200ms" May 15 12:52:11.926866 kubelet[2338]: I0515 12:52:11.926751 2338 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 12:52:11.927895 kubelet[2338]: I0515 12:52:11.927875 2338 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 15 12:52:11.927940 kubelet[2338]: I0515 12:52:11.927904 2338 status_manager.go:217] "Starting to sync pod status with apiserver" May 15 12:52:11.927940 kubelet[2338]: I0515 12:52:11.927917 2338 kubelet.go:2321] "Starting kubelet main sync loop" May 15 12:52:11.927992 kubelet[2338]: E0515 12:52:11.927958 2338 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 12:52:11.937357 kubelet[2338]: W0515 12:52:11.937319 2338 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.232.9.197:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.232.9.197:6443: connect: connection refused May 15 12:52:11.937444 kubelet[2338]: E0515 12:52:11.937371 2338 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.232.9.197:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.232.9.197:6443: connect: connection refused" logger="UnhandledError" May 15 12:52:11.950486 kubelet[2338]: I0515 12:52:11.950406 2338 cpu_manager.go:214] "Starting CPU manager" policy="none" May 15 12:52:11.950486 kubelet[2338]: I0515 12:52:11.950420 2338 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 15 12:52:11.950486 kubelet[2338]: I0515 12:52:11.950433 2338 state_mem.go:36] "Initialized new in-memory state store" May 15 12:52:11.951914 kubelet[2338]: I0515 12:52:11.951857 2338 policy_none.go:49] "None policy: Start" May 15 12:52:11.952483 kubelet[2338]: I0515 12:52:11.952319 2338 memory_manager.go:170] "Starting memorymanager" policy="None" May 15 12:52:11.952483 kubelet[2338]: I0515 12:52:11.952336 2338 state_mem.go:35] "Initializing new in-memory state store" May 15 12:52:11.959370 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 15 12:52:11.969228 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 15 12:52:11.972980 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 15 12:52:11.981145 kubelet[2338]: I0515 12:52:11.981127 2338 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 12:52:11.981695 kubelet[2338]: I0515 12:52:11.981478 2338 eviction_manager.go:189] "Eviction manager: starting control loop" May 15 12:52:11.981731 kubelet[2338]: I0515 12:52:11.981682 2338 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 12:52:11.982319 kubelet[2338]: I0515 12:52:11.981958 2338 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 12:52:11.983927 kubelet[2338]: E0515 12:52:11.983916 2338 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-232-9-197\" not found" May 15 12:52:12.038849 systemd[1]: Created slice kubepods-burstable-pode12bced349590305f32ab53204afac2a.slice - libcontainer container kubepods-burstable-pode12bced349590305f32ab53204afac2a.slice. May 15 12:52:12.064589 systemd[1]: Created slice kubepods-burstable-pod7cd367d19d62ce7e23d64221a34a0937.slice - libcontainer container kubepods-burstable-pod7cd367d19d62ce7e23d64221a34a0937.slice. 
May 15 12:52:12.079559 systemd[1]: Created slice kubepods-burstable-pod4e668700df5eb478e4ddee5a7cf86272.slice - libcontainer container kubepods-burstable-pod4e668700df5eb478e4ddee5a7cf86272.slice. May 15 12:52:12.083628 kubelet[2338]: I0515 12:52:12.083604 2338 kubelet_node_status.go:72] "Attempting to register node" node="172-232-9-197" May 15 12:52:12.083928 kubelet[2338]: E0515 12:52:12.083890 2338 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.232.9.197:6443/api/v1/nodes\": dial tcp 172.232.9.197:6443: connect: connection refused" node="172-232-9-197" May 15 12:52:12.116672 kubelet[2338]: I0515 12:52:12.116500 2338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e12bced349590305f32ab53204afac2a-k8s-certs\") pod \"kube-apiserver-172-232-9-197\" (UID: \"e12bced349590305f32ab53204afac2a\") " pod="kube-system/kube-apiserver-172-232-9-197" May 15 12:52:12.116672 kubelet[2338]: I0515 12:52:12.116527 2338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e12bced349590305f32ab53204afac2a-usr-share-ca-certificates\") pod \"kube-apiserver-172-232-9-197\" (UID: \"e12bced349590305f32ab53204afac2a\") " pod="kube-system/kube-apiserver-172-232-9-197" May 15 12:52:12.116672 kubelet[2338]: I0515 12:52:12.116547 2338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7cd367d19d62ce7e23d64221a34a0937-flexvolume-dir\") pod \"kube-controller-manager-172-232-9-197\" (UID: \"7cd367d19d62ce7e23d64221a34a0937\") " pod="kube-system/kube-controller-manager-172-232-9-197" May 15 12:52:12.116672 kubelet[2338]: I0515 12:52:12.116562 2338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7cd367d19d62ce7e23d64221a34a0937-k8s-certs\") pod \"kube-controller-manager-172-232-9-197\" (UID: \"7cd367d19d62ce7e23d64221a34a0937\") " pod="kube-system/kube-controller-manager-172-232-9-197" May 15 12:52:12.116672 kubelet[2338]: I0515 12:52:12.116603 2338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7cd367d19d62ce7e23d64221a34a0937-kubeconfig\") pod \"kube-controller-manager-172-232-9-197\" (UID: \"7cd367d19d62ce7e23d64221a34a0937\") " pod="kube-system/kube-controller-manager-172-232-9-197" May 15 12:52:12.116840 kubelet[2338]: I0515 12:52:12.116620 2338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7cd367d19d62ce7e23d64221a34a0937-usr-share-ca-certificates\") pod \"kube-controller-manager-172-232-9-197\" (UID: \"7cd367d19d62ce7e23d64221a34a0937\") " pod="kube-system/kube-controller-manager-172-232-9-197" May 15 12:52:12.116840 kubelet[2338]: I0515 12:52:12.116636 2338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4e668700df5eb478e4ddee5a7cf86272-kubeconfig\") pod \"kube-scheduler-172-232-9-197\" (UID: \"4e668700df5eb478e4ddee5a7cf86272\") " pod="kube-system/kube-scheduler-172-232-9-197" May 15 12:52:12.116840 kubelet[2338]: I0515 12:52:12.116649 2338 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e12bced349590305f32ab53204afac2a-ca-certs\") pod \"kube-apiserver-172-232-9-197\" (UID: \"e12bced349590305f32ab53204afac2a\") " pod="kube-system/kube-apiserver-172-232-9-197" May 15 12:52:12.116840 kubelet[2338]: I0515 12:52:12.116661 2338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7cd367d19d62ce7e23d64221a34a0937-ca-certs\") pod \"kube-controller-manager-172-232-9-197\" (UID: \"7cd367d19d62ce7e23d64221a34a0937\") " pod="kube-system/kube-controller-manager-172-232-9-197" May 15 12:52:12.124735 kubelet[2338]: E0515 12:52:12.124710 2338 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.232.9.197:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-232-9-197?timeout=10s\": dial tcp 172.232.9.197:6443: connect: connection refused" interval="400ms" May 15 12:52:12.286269 kubelet[2338]: I0515 12:52:12.286209 2338 kubelet_node_status.go:72] "Attempting to register node" node="172-232-9-197" May 15 12:52:12.286769 kubelet[2338]: E0515 12:52:12.286742 2338 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.232.9.197:6443/api/v1/nodes\": dial tcp 172.232.9.197:6443: connect: connection refused" node="172-232-9-197" May 15 12:52:12.362399 kubelet[2338]: E0515 12:52:12.362361 2338 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:52:12.363034 containerd[1536]: time="2025-05-15T12:52:12.362996939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-232-9-197,Uid:e12bced349590305f32ab53204afac2a,Namespace:kube-system,Attempt:0,}" May 15 12:52:12.377739 kubelet[2338]: E0515 12:52:12.377647 2338 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:52:12.378019 containerd[1536]: time="2025-05-15T12:52:12.377989657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-232-9-197,Uid:7cd367d19d62ce7e23d64221a34a0937,Namespace:kube-system,Attempt:0,}" May 15 12:52:12.382662 kubelet[2338]: E0515 12:52:12.382153 2338 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:52:12.383404 containerd[1536]: time="2025-05-15T12:52:12.383364830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-232-9-197,Uid:4e668700df5eb478e4ddee5a7cf86272,Namespace:kube-system,Attempt:0,}" May 15 12:52:12.384975 containerd[1536]: time="2025-05-15T12:52:12.384953320Z" level=info msg="connecting to shim 9676b46290fc133577648184e05086f03ccaf9ce0ef82e3c47325ac52bf0706f" address="unix:///run/containerd/s/35b7a8aba3c2518bcf8b40f757383055450a1a18477ce1cb557380a5161b4e96" namespace=k8s.io protocol=ttrpc version=3 May 15 12:52:12.412852 containerd[1536]: time="2025-05-15T12:52:12.412805024Z" level=info msg="connecting to shim d410f24273b3dcbc9e3e8eaca8a98f8d16529589f99e08b07f9c862150e92170" address="unix:///run/containerd/s/e928626d86f2fba12ca90faa6e24bba6800456457c713374bd4ef9ada4410be3" 
namespace=k8s.io protocol=ttrpc version=3 May 15 12:52:12.435279 containerd[1536]: time="2025-05-15T12:52:12.435138265Z" level=info msg="connecting to shim 870452caf042b160543478bc0076e5a4afef3944615823f9dc907fa8e5b6133a" address="unix:///run/containerd/s/1434a36693cfbcd1b09ecd70610f32ad108db3dcff9ea8a1eedec595cc82d903" namespace=k8s.io protocol=ttrpc version=3 May 15 12:52:12.440836 systemd[1]: Started cri-containerd-9676b46290fc133577648184e05086f03ccaf9ce0ef82e3c47325ac52bf0706f.scope - libcontainer container 9676b46290fc133577648184e05086f03ccaf9ce0ef82e3c47325ac52bf0706f. May 15 12:52:12.443337 systemd[1]: Started cri-containerd-d410f24273b3dcbc9e3e8eaca8a98f8d16529589f99e08b07f9c862150e92170.scope - libcontainer container d410f24273b3dcbc9e3e8eaca8a98f8d16529589f99e08b07f9c862150e92170. May 15 12:52:12.468811 systemd[1]: Started cri-containerd-870452caf042b160543478bc0076e5a4afef3944615823f9dc907fa8e5b6133a.scope - libcontainer container 870452caf042b160543478bc0076e5a4afef3944615823f9dc907fa8e5b6133a. May 15 12:52:12.516826 containerd[1536]: time="2025-05-15T12:52:12.516755666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-232-9-197,Uid:7cd367d19d62ce7e23d64221a34a0937,Namespace:kube-system,Attempt:0,} returns sandbox id \"d410f24273b3dcbc9e3e8eaca8a98f8d16529589f99e08b07f9c862150e92170\"" May 15 12:52:12.518305 kubelet[2338]: E0515 12:52:12.518271 2338 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:52:12.521211 containerd[1536]: time="2025-05-15T12:52:12.521178138Z" level=info msg="CreateContainer within sandbox \"d410f24273b3dcbc9e3e8eaca8a98f8d16529589f99e08b07f9c862150e92170\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 15 12:52:12.526301 kubelet[2338]: E0515 12:52:12.526266 2338 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.232.9.197:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-232-9-197?timeout=10s\": dial tcp 172.232.9.197:6443: connect: connection refused" interval="800ms" May 15 12:52:12.536347 containerd[1536]: time="2025-05-15T12:52:12.536310526Z" level=info msg="Container 7610435244cd7bd48bc12666dcf438d51fe678bee83bb465898071e2b3baf7a3: CDI devices from CRI Config.CDIDevices: []" May 15 12:52:12.553560 containerd[1536]: time="2025-05-15T12:52:12.553490485Z" level=info msg="CreateContainer within sandbox \"d410f24273b3dcbc9e3e8eaca8a98f8d16529589f99e08b07f9c862150e92170\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7610435244cd7bd48bc12666dcf438d51fe678bee83bb465898071e2b3baf7a3\"" May 15 12:52:12.553954 containerd[1536]: time="2025-05-15T12:52:12.553904005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-232-9-197,Uid:e12bced349590305f32ab53204afac2a,Namespace:kube-system,Attempt:0,} returns sandbox id \"9676b46290fc133577648184e05086f03ccaf9ce0ef82e3c47325ac52bf0706f\"" May 15 12:52:12.554484 containerd[1536]: time="2025-05-15T12:52:12.554414425Z" level=info msg="StartContainer for \"7610435244cd7bd48bc12666dcf438d51fe678bee83bb465898071e2b3baf7a3\"" May 15 12:52:12.557129 containerd[1536]: time="2025-05-15T12:52:12.557095976Z" level=info msg="connecting to shim 7610435244cd7bd48bc12666dcf438d51fe678bee83bb465898071e2b3baf7a3" 
address="unix:///run/containerd/s/e928626d86f2fba12ca90faa6e24bba6800456457c713374bd4ef9ada4410be3" protocol=ttrpc version=3 May 15 12:52:12.557914 kubelet[2338]: E0515 12:52:12.557717 2338 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:52:12.561162 containerd[1536]: time="2025-05-15T12:52:12.561105938Z" level=info msg="CreateContainer within sandbox \"9676b46290fc133577648184e05086f03ccaf9ce0ef82e3c47325ac52bf0706f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 15 12:52:12.569317 containerd[1536]: time="2025-05-15T12:52:12.569253423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-232-9-197,Uid:4e668700df5eb478e4ddee5a7cf86272,Namespace:kube-system,Attempt:0,} returns sandbox id \"870452caf042b160543478bc0076e5a4afef3944615823f9dc907fa8e5b6133a\"" May 15 12:52:12.570079 containerd[1536]: time="2025-05-15T12:52:12.569854633Z" level=info msg="Container 1a54889ac2dd65a07ac665469ffdb3c40d843d038029d802ce96f5e7a2a96780: CDI devices from CRI Config.CDIDevices: []" May 15 12:52:12.570385 kubelet[2338]: E0515 12:52:12.570196 2338 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:52:12.573727 containerd[1536]: time="2025-05-15T12:52:12.573007864Z" level=info msg="CreateContainer within sandbox \"870452caf042b160543478bc0076e5a4afef3944615823f9dc907fa8e5b6133a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 15 12:52:12.580683 containerd[1536]: time="2025-05-15T12:52:12.580662288Z" level=info msg="CreateContainer within sandbox \"9676b46290fc133577648184e05086f03ccaf9ce0ef82e3c47325ac52bf0706f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1a54889ac2dd65a07ac665469ffdb3c40d843d038029d802ce96f5e7a2a96780\"" May 15 12:52:12.581357 containerd[1536]: time="2025-05-15T12:52:12.581338669Z" level=info msg="StartContainer for \"1a54889ac2dd65a07ac665469ffdb3c40d843d038029d802ce96f5e7a2a96780\"" May 15 12:52:12.582201 containerd[1536]: time="2025-05-15T12:52:12.582178569Z" level=info msg="connecting to shim 1a54889ac2dd65a07ac665469ffdb3c40d843d038029d802ce96f5e7a2a96780" address="unix:///run/containerd/s/35b7a8aba3c2518bcf8b40f757383055450a1a18477ce1cb557380a5161b4e96" protocol=ttrpc version=3 May 15 12:52:12.587687 systemd[1]: Started cri-containerd-7610435244cd7bd48bc12666dcf438d51fe678bee83bb465898071e2b3baf7a3.scope - libcontainer container 7610435244cd7bd48bc12666dcf438d51fe678bee83bb465898071e2b3baf7a3. 
May 15 12:52:12.590930 containerd[1536]: time="2025-05-15T12:52:12.590909933Z" level=info msg="Container 5009838b7d4f682858e339dfab3bebd967ae2bd973f6aef31ae2cde69cfb267a: CDI devices from CRI Config.CDIDevices: []" May 15 12:52:12.603315 containerd[1536]: time="2025-05-15T12:52:12.602959629Z" level=info msg="CreateContainer within sandbox \"870452caf042b160543478bc0076e5a4afef3944615823f9dc907fa8e5b6133a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5009838b7d4f682858e339dfab3bebd967ae2bd973f6aef31ae2cde69cfb267a\"" May 15 12:52:12.604832 containerd[1536]: time="2025-05-15T12:52:12.604815640Z" level=info msg="StartContainer for \"5009838b7d4f682858e339dfab3bebd967ae2bd973f6aef31ae2cde69cfb267a\"" May 15 12:52:12.606556 containerd[1536]: time="2025-05-15T12:52:12.606534371Z" level=info msg="connecting to shim 5009838b7d4f682858e339dfab3bebd967ae2bd973f6aef31ae2cde69cfb267a" address="unix:///run/containerd/s/1434a36693cfbcd1b09ecd70610f32ad108db3dcff9ea8a1eedec595cc82d903" protocol=ttrpc version=3 May 15 12:52:12.608760 systemd[1]: Started cri-containerd-1a54889ac2dd65a07ac665469ffdb3c40d843d038029d802ce96f5e7a2a96780.scope - libcontainer container 1a54889ac2dd65a07ac665469ffdb3c40d843d038029d802ce96f5e7a2a96780. May 15 12:52:12.636650 systemd[1]: Started cri-containerd-5009838b7d4f682858e339dfab3bebd967ae2bd973f6aef31ae2cde69cfb267a.scope - libcontainer container 5009838b7d4f682858e339dfab3bebd967ae2bd973f6aef31ae2cde69cfb267a. May 15 12:52:12.685625 containerd[1536]: time="2025-05-15T12:52:12.685572551Z" level=info msg="StartContainer for \"7610435244cd7bd48bc12666dcf438d51fe678bee83bb465898071e2b3baf7a3\" returns successfully" May 15 12:52:12.690315 kubelet[2338]: I0515 12:52:12.690292 2338 kubelet_node_status.go:72] "Attempting to register node" node="172-232-9-197" May 15 12:52:12.691345 kubelet[2338]: E0515 12:52:12.691315 2338 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.232.9.197:6443/api/v1/nodes\": dial tcp 172.232.9.197:6443: connect: connection refused" node="172-232-9-197" May 15 12:52:12.722827 containerd[1536]: time="2025-05-15T12:52:12.722785969Z" level=info msg="StartContainer for \"1a54889ac2dd65a07ac665469ffdb3c40d843d038029d802ce96f5e7a2a96780\" returns successfully" May 15 12:52:12.775479 containerd[1536]: time="2025-05-15T12:52:12.775410116Z" level=info msg="StartContainer for \"5009838b7d4f682858e339dfab3bebd967ae2bd973f6aef31ae2cde69cfb267a\" returns successfully" May 15 12:52:12.951521 kubelet[2338]: E0515 12:52:12.951383 2338 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:52:12.959799 kubelet[2338]: E0515 12:52:12.959308 2338 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:52:12.965897 kubelet[2338]: E0515 12:52:12.965771 2338 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:52:13.513269 kubelet[2338]: I0515 12:52:13.501635 2338 kubelet_node_status.go:72] "Attempting to register node" node="172-232-9-197" May 15 12:52:13.981726 kubelet[2338]: E0515 12:52:13.981672 2338 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:52:15.240498 kubelet[2338]: E0515 12:52:15.240001 2338 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-232-9-197\" not found" node="172-232-9-197" May 15 12:52:15.295482 kubelet[2338]: I0515 12:52:15.294415 2338 kubelet_node_status.go:75] "Successfully registered node" node="172-232-9-197" May 15 12:52:16.015928 kubelet[2338]: I0515 12:52:16.015884 2338 apiserver.go:52] "Watching apiserver" May 15 12:52:16.116906 kubelet[2338]: I0515 12:52:16.116869 2338 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 15 12:52:17.344155 systemd[1]: systemd-hostnamed.service: Deactivated successfully. May 15 12:52:17.530737 systemd[1]: Reload requested from client PID 2608 ('systemctl') (unit session-7.scope)... May 15 12:52:17.530753 systemd[1]: Reloading... May 15 12:52:17.828547 zram_generator::config[2651]: No configuration found. May 15 12:52:17.884539 kubelet[2338]: E0515 12:52:17.884498 2338 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:52:17.934441 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 12:52:18.024270 kubelet[2338]: E0515 12:52:18.024231 2338 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:52:18.038611 kubelet[2338]: E0515 12:52:18.038579 2338 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:52:18.049493 systemd[1]: Reloading finished in 518 ms. May 15 12:52:18.083164 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 15 12:52:18.103784 systemd[1]: kubelet.service: Deactivated successfully. May 15 12:52:18.104132 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 12:52:18.104192 systemd[1]: kubelet.service: Consumed 701ms CPU time, 114.8M memory peak. May 15 12:52:18.107523 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 12:52:18.272350 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 12:52:18.286531 (kubelet)[2701]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 15 12:52:18.356764 kubelet[2701]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 12:52:18.356764 kubelet[2701]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 15 12:52:18.356764 kubelet[2701]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 12:52:18.357082 kubelet[2701]: I0515 12:52:18.356742 2701 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 12:52:18.364494 kubelet[2701]: I0515 12:52:18.363782 2701 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 15 12:52:18.364494 kubelet[2701]: I0515 12:52:18.363804 2701 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 12:52:18.364494 kubelet[2701]: I0515 12:52:18.364011 2701 server.go:929] "Client rotation is on, will bootstrap in background" May 15 12:52:18.365303 kubelet[2701]: I0515 12:52:18.365287 2701 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 15 12:52:18.366901 kubelet[2701]: I0515 12:52:18.366884 2701 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 12:52:18.373813 kubelet[2701]: I0515 12:52:18.373786 2701 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 15 12:52:18.376974 kubelet[2701]: I0515 12:52:18.376946 2701 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 15 12:52:18.377065 kubelet[2701]: I0515 12:52:18.377044 2701 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 15 12:52:18.377201 kubelet[2701]: I0515 12:52:18.377161 2701 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 12:52:18.377332 kubelet[2701]: I0515 12:52:18.377188 2701 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-232-9-197","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 15 12:52:18.377332 kubelet[2701]: I0515 12:52:18.377325 2701 topology_manager.go:138] "Creating topology manager with none policy" May 15 12:52:18.377332 
kubelet[2701]: I0515 12:52:18.377333 2701 container_manager_linux.go:300] "Creating device plugin manager" May 15 12:52:18.377473 kubelet[2701]: I0515 12:52:18.377364 2701 state_mem.go:36] "Initialized new in-memory state store" May 15 12:52:18.377504 kubelet[2701]: I0515 12:52:18.377493 2701 kubelet.go:408] "Attempting to sync node with API server" May 15 12:52:18.377533 kubelet[2701]: I0515 12:52:18.377505 2701 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 12:52:18.377927 kubelet[2701]: I0515 12:52:18.377903 2701 kubelet.go:314] "Adding apiserver pod source" May 15 12:52:18.380480 kubelet[2701]: I0515 12:52:18.378533 2701 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 12:52:18.380834 kubelet[2701]: I0515 12:52:18.380818 2701 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 15 12:52:18.381177 kubelet[2701]: I0515 12:52:18.381159 2701 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 12:52:18.382165 kubelet[2701]: I0515 12:52:18.382152 2701 server.go:1269] "Started kubelet" May 15 12:52:18.387597 kubelet[2701]: I0515 12:52:18.387525 2701 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 12:52:18.387877 kubelet[2701]: I0515 12:52:18.387851 2701 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 12:52:18.387948 kubelet[2701]: I0515 12:52:18.387919 2701 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 15 12:52:18.389139 kubelet[2701]: I0515 12:52:18.389118 2701 server.go:460] "Adding debug handlers to kubelet server" May 15 12:52:18.389508 kubelet[2701]: I0515 12:52:18.389495 2701 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 12:52:18.394585 kubelet[2701]: I0515 12:52:18.394567 2701 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 15 12:52:18.397113 kubelet[2701]: I0515 12:52:18.397080 2701 volume_manager.go:289] "Starting Kubelet Volume Manager" May 15 12:52:18.397715 kubelet[2701]: E0515 12:52:18.397616 2701 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-232-9-197\" not found" May 15 12:52:18.400452 kubelet[2701]: I0515 12:52:18.400315 2701 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 15 12:52:18.400598 kubelet[2701]: I0515 12:52:18.400576 2701 reconciler.go:26] "Reconciler: start to sync state" May 15 12:52:18.402300 kubelet[2701]: E0515 12:52:18.402173 2701 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 12:52:18.402556 kubelet[2701]: I0515 12:52:18.402526 2701 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" May 15 12:52:18.403943 kubelet[2701]: I0515 12:52:18.403927 2701 factory.go:221] Registration of the systemd container factory successfully May 15 12:52:18.404090 kubelet[2701]: I0515 12:52:18.404072 2701 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 12:52:18.406120 kubelet[2701]: I0515 12:52:18.404102 2701 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 15 12:52:18.406206 kubelet[2701]: I0515 12:52:18.406196 2701 status_manager.go:217] "Starting to sync pod status with apiserver" May 15 12:52:18.406271 kubelet[2701]: I0515 12:52:18.406251 2701 kubelet.go:2321] "Starting kubelet main sync loop" May 15 12:52:18.406412 kubelet[2701]: E0515 12:52:18.406390 2701 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 12:52:18.411341 kubelet[2701]: I0515 12:52:18.411310 2701 factory.go:221] Registration of the containerd container factory successfully May 15 12:52:18.458173 kubelet[2701]: I0515 12:52:18.458149 2701 cpu_manager.go:214] "Starting CPU manager" policy="none" May 15 12:52:18.458173 kubelet[2701]: I0515 12:52:18.458167 2701 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 15 12:52:18.458259 kubelet[2701]: I0515 12:52:18.458183 2701 state_mem.go:36] "Initialized new in-memory state store" May 15 12:52:18.458375 kubelet[2701]: I0515 12:52:18.458306 2701 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 15 12:52:18.458375 kubelet[2701]: I0515 12:52:18.458324 2701 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 15 12:52:18.458375 kubelet[2701]: I0515 12:52:18.458343 2701 policy_none.go:49] "None policy: Start" May 15 12:52:18.459044 kubelet[2701]: I0515 12:52:18.459023 2701 memory_manager.go:170] "Starting memorymanager" policy="None" May 15 12:52:18.459044 kubelet[2701]: I0515 12:52:18.459046 2701 state_mem.go:35] "Initializing new in-memory state store" May 15 12:52:18.459191 kubelet[2701]: I0515 12:52:18.459158 2701 state_mem.go:75] "Updated machine memory state" May 15 12:52:18.473764 kubelet[2701]: I0515 12:52:18.473577 2701 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 12:52:18.474078 kubelet[2701]: I0515 12:52:18.473816 2701 eviction_manager.go:189] "Eviction manager: starting control loop" May 15 12:52:18.474078 kubelet[2701]: I0515 12:52:18.473857 2701 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 12:52:18.474216 kubelet[2701]: I0515 12:52:18.474191 2701 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 12:52:18.519743 kubelet[2701]: E0515 12:52:18.519707 2701 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-172-232-9-197\" already exists" pod="kube-system/kube-scheduler-172-232-9-197" May 15 12:52:18.522864 kubelet[2701]: E0515 12:52:18.522845 2701 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-172-232-9-197\" already exists" pod="kube-system/kube-controller-manager-172-232-9-197" May 15 12:52:18.581522 kubelet[2701]: I0515 12:52:18.581489 2701 kubelet_node_status.go:72] "Attempting to register node" node="172-232-9-197" May 15 12:52:18.604228 kubelet[2701]: I0515 12:52:18.604186 2701 
kubelet_node_status.go:111] "Node was previously registered" node="172-232-9-197" May 15 12:52:18.604321 kubelet[2701]: I0515 12:52:18.604239 2701 kubelet_node_status.go:75] "Successfully registered node" node="172-232-9-197" May 15 12:52:18.702214 kubelet[2701]: I0515 12:52:18.702089 2701 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7cd367d19d62ce7e23d64221a34a0937-kubeconfig\") pod \"kube-controller-manager-172-232-9-197\" (UID: \"7cd367d19d62ce7e23d64221a34a0937\") " pod="kube-system/kube-controller-manager-172-232-9-197" May 15 12:52:18.702214 kubelet[2701]: I0515 12:52:18.702182 2701 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e12bced349590305f32ab53204afac2a-usr-share-ca-certificates\") pod \"kube-apiserver-172-232-9-197\" (UID: \"e12bced349590305f32ab53204afac2a\") " pod="kube-system/kube-apiserver-172-232-9-197" May 15 12:52:18.702214 kubelet[2701]: I0515 12:52:18.702203 2701 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7cd367d19d62ce7e23d64221a34a0937-ca-certs\") pod \"kube-controller-manager-172-232-9-197\" (UID: \"7cd367d19d62ce7e23d64221a34a0937\") " pod="kube-system/kube-controller-manager-172-232-9-197" May 15 12:52:18.703303 kubelet[2701]: I0515 12:52:18.702221 2701 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7cd367d19d62ce7e23d64221a34a0937-flexvolume-dir\") pod \"kube-controller-manager-172-232-9-197\" (UID: \"7cd367d19d62ce7e23d64221a34a0937\") " pod="kube-system/kube-controller-manager-172-232-9-197" May 15 12:52:18.703303 kubelet[2701]: I0515 12:52:18.703299 2701 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7cd367d19d62ce7e23d64221a34a0937-k8s-certs\") pod \"kube-controller-manager-172-232-9-197\" (UID: \"7cd367d19d62ce7e23d64221a34a0937\") " pod="kube-system/kube-controller-manager-172-232-9-197" May 15 12:52:18.703413 kubelet[2701]: I0515 12:52:18.703341 2701 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e12bced349590305f32ab53204afac2a-ca-certs\") pod \"kube-apiserver-172-232-9-197\" (UID: \"e12bced349590305f32ab53204afac2a\") " pod="kube-system/kube-apiserver-172-232-9-197" May 15 12:52:18.703413 kubelet[2701]: I0515 12:52:18.703356 2701 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e12bced349590305f32ab53204afac2a-k8s-certs\") pod \"kube-apiserver-172-232-9-197\" (UID: \"e12bced349590305f32ab53204afac2a\") " pod="kube-system/kube-apiserver-172-232-9-197" May 15 12:52:18.703413 kubelet[2701]: I0515 12:52:18.703377 2701 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7cd367d19d62ce7e23d64221a34a0937-usr-share-ca-certificates\") pod \"kube-controller-manager-172-232-9-197\" (UID: \"7cd367d19d62ce7e23d64221a34a0937\") " pod="kube-system/kube-controller-manager-172-232-9-197" May 15 12:52:18.703716 kubelet[2701]: I0515 
12:52:18.703421 2701 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4e668700df5eb478e4ddee5a7cf86272-kubeconfig\") pod \"kube-scheduler-172-232-9-197\" (UID: \"4e668700df5eb478e4ddee5a7cf86272\") " pod="kube-system/kube-scheduler-172-232-9-197" May 15 12:52:18.820264 kubelet[2701]: E0515 12:52:18.819201 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:52:18.820768 kubelet[2701]: E0515 12:52:18.820715 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:52:18.823887 kubelet[2701]: E0515 12:52:18.823558 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:52:19.379771 kubelet[2701]: I0515 12:52:19.379553 2701 apiserver.go:52] "Watching apiserver" May 15 12:52:19.400895 kubelet[2701]: I0515 12:52:19.400861 2701 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 15 12:52:19.437010 kubelet[2701]: E0515 12:52:19.436959 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:52:19.438520 kubelet[2701]: E0515 12:52:19.437515 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:52:19.463278 kubelet[2701]: E0515 12:52:19.463059 2701 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-172-232-9-197\" already exists" pod="kube-system/kube-controller-manager-172-232-9-197" May 15 12:52:19.463278 kubelet[2701]: E0515 12:52:19.463205 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:52:19.484815 kubelet[2701]: I0515 12:52:19.484751 2701 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-232-9-197" podStartSLOduration=2.484739728 podStartE2EDuration="2.484739728s" podCreationTimestamp="2025-05-15 12:52:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 12:52:19.482877287 +0000 UTC m=+1.176787549" watchObservedRunningTime="2025-05-15 12:52:19.484739728 +0000 UTC m=+1.178649990" May 15 12:52:19.503768 kubelet[2701]: I0515 12:52:19.503693 2701 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-232-9-197" podStartSLOduration=1.503680497 podStartE2EDuration="1.503680497s" podCreationTimestamp="2025-05-15 12:52:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 12:52:19.492729502 +0000 UTC m=+1.186639774" watchObservedRunningTime="2025-05-15 12:52:19.503680497 +0000 UTC m=+1.197590759" May 15 12:52:20.439480 kubelet[2701]: E0515 
12:52:20.439419 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:52:20.441635 kubelet[2701]: E0515 12:52:20.440105 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:52:22.935169 kubelet[2701]: E0515 12:52:22.935011 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:52:22.945791 kubelet[2701]: I0515 12:52:22.945714 2701 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-232-9-197" podStartSLOduration=4.945698827 podStartE2EDuration="4.945698827s" podCreationTimestamp="2025-05-15 12:52:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 12:52:19.504357598 +0000 UTC m=+1.198267870" watchObservedRunningTime="2025-05-15 12:52:22.945698827 +0000 UTC m=+4.639609099" May 15 12:52:23.171796 kubelet[2701]: I0515 12:52:23.171599 2701 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 15 12:52:23.172805 containerd[1536]: time="2025-05-15T12:52:23.172638191Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 15 12:52:23.174674 kubelet[2701]: I0515 12:52:23.173576 2701 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 15 12:52:23.445832 kubelet[2701]: E0515 12:52:23.445719 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:52:23.873701 kubelet[2701]: E0515 12:52:23.872660 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:52:23.892308 systemd[1]: Created slice kubepods-besteffort-pod6a42562e_8d1d_44ec_9b9a_a72e0f9484d8.slice - libcontainer container kubepods-besteffort-pod6a42562e_8d1d_44ec_9b9a_a72e0f9484d8.slice. 
May 15 12:52:24.000489 kubelet[2701]: I0515 12:52:24.000435 2701 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jg5c\" (UniqueName: \"kubernetes.io/projected/6a42562e-8d1d-44ec-9b9a-a72e0f9484d8-kube-api-access-8jg5c\") pod \"kube-proxy-rhz8r\" (UID: \"6a42562e-8d1d-44ec-9b9a-a72e0f9484d8\") " pod="kube-system/kube-proxy-rhz8r" May 15 12:52:24.000489 kubelet[2701]: I0515 12:52:24.000689 2701 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6a42562e-8d1d-44ec-9b9a-a72e0f9484d8-kube-proxy\") pod \"kube-proxy-rhz8r\" (UID: \"6a42562e-8d1d-44ec-9b9a-a72e0f9484d8\") " pod="kube-system/kube-proxy-rhz8r" May 15 12:52:24.001120 kubelet[2701]: I0515 12:52:24.000712 2701 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6a42562e-8d1d-44ec-9b9a-a72e0f9484d8-xtables-lock\") pod \"kube-proxy-rhz8r\" (UID: \"6a42562e-8d1d-44ec-9b9a-a72e0f9484d8\") " pod="kube-system/kube-proxy-rhz8r" May 15 12:52:24.001120 kubelet[2701]: I0515 12:52:24.000748 2701 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6a42562e-8d1d-44ec-9b9a-a72e0f9484d8-lib-modules\") pod \"kube-proxy-rhz8r\" (UID: \"6a42562e-8d1d-44ec-9b9a-a72e0f9484d8\") " pod="kube-system/kube-proxy-rhz8r" May 15 12:52:24.184105 sudo[1792]: pam_unix(sudo:session): session closed for user root May 15 12:52:24.202260 kubelet[2701]: E0515 12:52:24.201899 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:52:24.202701 containerd[1536]: time="2025-05-15T12:52:24.202660465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rhz8r,Uid:6a42562e-8d1d-44ec-9b9a-a72e0f9484d8,Namespace:kube-system,Attempt:0,}" May 15 12:52:24.229854 containerd[1536]: time="2025-05-15T12:52:24.229816559Z" level=info msg="connecting to shim 41751833c3e44820567a87db88b6c17afa7b066dd17b3ef86d1a305e21cc57f2" address="unix:///run/containerd/s/4988ee5392be65bc975ce22d8f5ae0f54d1dcc2ec1f534227d6f28c464ec4393" namespace=k8s.io protocol=ttrpc version=3 May 15 12:52:24.236496 sshd[1791]: Connection closed by 139.178.89.65 port 40714 May 15 12:52:24.239053 sshd-session[1789]: pam_unix(sshd:session): session closed for user core May 15 12:52:24.247601 systemd[1]: sshd@6-172.232.9.197:22-139.178.89.65:40714.service: Deactivated successfully. May 15 12:52:24.251528 systemd[1]: session-7.scope: Deactivated successfully. May 15 12:52:24.252148 systemd[1]: session-7.scope: Consumed 4.195s CPU time, 232.4M memory peak. May 15 12:52:24.255554 systemd-logind[1520]: Session 7 logged out. Waiting for processes to exit. May 15 12:52:24.266716 systemd[1]: Started cri-containerd-41751833c3e44820567a87db88b6c17afa7b066dd17b3ef86d1a305e21cc57f2.scope - libcontainer container 41751833c3e44820567a87db88b6c17afa7b066dd17b3ef86d1a305e21cc57f2. May 15 12:52:24.268374 systemd-logind[1520]: Removed session 7. May 15 12:52:24.296845 systemd[1]: Created slice kubepods-besteffort-podad96d495_0fda_41e6_92fb_0740b7461b77.slice - libcontainer container kubepods-besteffort-podad96d495_0fda_41e6_92fb_0740b7461b77.slice. 
May 15 12:52:24.310184 containerd[1536]: time="2025-05-15T12:52:24.310153829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rhz8r,Uid:6a42562e-8d1d-44ec-9b9a-a72e0f9484d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"41751833c3e44820567a87db88b6c17afa7b066dd17b3ef86d1a305e21cc57f2\"" May 15 12:52:24.310969 kubelet[2701]: E0515 12:52:24.310949 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:52:24.313402 containerd[1536]: time="2025-05-15T12:52:24.313289430Z" level=info msg="CreateContainer within sandbox \"41751833c3e44820567a87db88b6c17afa7b066dd17b3ef86d1a305e21cc57f2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 15 12:52:24.325487 containerd[1536]: time="2025-05-15T12:52:24.325410417Z" level=info msg="Container 69557a38c24409b433c2570185980d3ddd3bfd904c9046c27056ebf925b6c268: CDI devices from CRI Config.CDIDevices: []" May 15 12:52:24.330498 containerd[1536]: time="2025-05-15T12:52:24.330446599Z" level=info msg="CreateContainer within sandbox \"41751833c3e44820567a87db88b6c17afa7b066dd17b3ef86d1a305e21cc57f2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"69557a38c24409b433c2570185980d3ddd3bfd904c9046c27056ebf925b6c268\"" May 15 12:52:24.331704 containerd[1536]: time="2025-05-15T12:52:24.331646780Z" level=info msg="StartContainer for \"69557a38c24409b433c2570185980d3ddd3bfd904c9046c27056ebf925b6c268\"" May 15 12:52:24.334181 containerd[1536]: time="2025-05-15T12:52:24.334147521Z" level=info msg="connecting to shim 69557a38c24409b433c2570185980d3ddd3bfd904c9046c27056ebf925b6c268" address="unix:///run/containerd/s/4988ee5392be65bc975ce22d8f5ae0f54d1dcc2ec1f534227d6f28c464ec4393" protocol=ttrpc version=3 May 15 12:52:24.354585 systemd[1]: Started cri-containerd-69557a38c24409b433c2570185980d3ddd3bfd904c9046c27056ebf925b6c268.scope - libcontainer container 69557a38c24409b433c2570185980d3ddd3bfd904c9046c27056ebf925b6c268. 
May 15 12:52:24.402773 kubelet[2701]: I0515 12:52:24.402641 2701 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ad96d495-0fda-41e6-92fb-0740b7461b77-var-lib-calico\") pod \"tigera-operator-6f6897fdc5-8rg8f\" (UID: \"ad96d495-0fda-41e6-92fb-0740b7461b77\") " pod="tigera-operator/tigera-operator-6f6897fdc5-8rg8f" May 15 12:52:24.402773 kubelet[2701]: I0515 12:52:24.402684 2701 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ffzt\" (UniqueName: \"kubernetes.io/projected/ad96d495-0fda-41e6-92fb-0740b7461b77-kube-api-access-2ffzt\") pod \"tigera-operator-6f6897fdc5-8rg8f\" (UID: \"ad96d495-0fda-41e6-92fb-0740b7461b77\") " pod="tigera-operator/tigera-operator-6f6897fdc5-8rg8f" May 15 12:52:24.421228 containerd[1536]: time="2025-05-15T12:52:24.421094034Z" level=info msg="StartContainer for \"69557a38c24409b433c2570185980d3ddd3bfd904c9046c27056ebf925b6c268\" returns successfully" May 15 12:52:24.455818 kubelet[2701]: E0515 12:52:24.452848 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:52:24.455974 kubelet[2701]: E0515 12:52:24.453048 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:52:24.470819 kubelet[2701]: I0515 12:52:24.469932 2701 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rhz8r" podStartSLOduration=1.469919419 podStartE2EDuration="1.469919419s" podCreationTimestamp="2025-05-15 12:52:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 12:52:24.469447248 +0000 UTC m=+6.163357510" watchObservedRunningTime="2025-05-15 12:52:24.469919419 +0000 UTC m=+6.163829681" May 15 12:52:24.602481 containerd[1536]: time="2025-05-15T12:52:24.602413015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-8rg8f,Uid:ad96d495-0fda-41e6-92fb-0740b7461b77,Namespace:tigera-operator,Attempt:0,}" May 15 12:52:24.631694 containerd[1536]: time="2025-05-15T12:52:24.630872099Z" level=info msg="connecting to shim 7f168a58389272332e2519584a46f384575c19cf860482036ce9b51a0fc3a0e1" address="unix:///run/containerd/s/e11a92f5e38a3bbfbf34712eaf10e83ac58da64dd10d853ee6e0b62926c6b095" namespace=k8s.io protocol=ttrpc version=3 May 15 12:52:24.669782 systemd[1]: Started cri-containerd-7f168a58389272332e2519584a46f384575c19cf860482036ce9b51a0fc3a0e1.scope - libcontainer container 7f168a58389272332e2519584a46f384575c19cf860482036ce9b51a0fc3a0e1. 
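In the pod_startup_latency_tracker entry above, podStartE2EDuration is the watch-observed running time minus podCreationTimestamp: 12:52:24.469919419 − 12:52:23 = 1.469919419 s. firstStartedPulling and lastFinishedPulling are the zero time because no image pull was needed for kube-proxy, so podStartSLOduration, which excludes pull time, reports the same 1.469919419 s.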
May 15 12:52:24.727970 containerd[1536]: time="2025-05-15T12:52:24.727853838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-8rg8f,Uid:ad96d495-0fda-41e6-92fb-0740b7461b77,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"7f168a58389272332e2519584a46f384575c19cf860482036ce9b51a0fc3a0e1\"" May 15 12:52:24.730633 containerd[1536]: time="2025-05-15T12:52:24.730604419Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 15 12:52:25.455958 kubelet[2701]: E0515 12:52:25.455670 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:52:25.552358 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount187581559.mount: Deactivated successfully. May 15 12:52:26.656328 containerd[1536]: time="2025-05-15T12:52:26.656274091Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:52:26.657078 containerd[1536]: time="2025-05-15T12:52:26.656882851Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662" May 15 12:52:26.657262 containerd[1536]: time="2025-05-15T12:52:26.657241442Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:52:26.658619 containerd[1536]: time="2025-05-15T12:52:26.658586912Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:52:26.659252 containerd[1536]: time="2025-05-15T12:52:26.659115903Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 1.928383054s" May 15 12:52:26.659252 containerd[1536]: time="2025-05-15T12:52:26.659142023Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\"" May 15 12:52:26.661311 containerd[1536]: time="2025-05-15T12:52:26.661285424Z" level=info msg="CreateContainer within sandbox \"7f168a58389272332e2519584a46f384575c19cf860482036ce9b51a0fc3a0e1\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 15 12:52:26.665910 containerd[1536]: time="2025-05-15T12:52:26.665878176Z" level=info msg="Container 313b955a5f158124ee7f5810a2c9359060133f898d0991db5963dadad919bd9b: CDI devices from CRI Config.CDIDevices: []" May 15 12:52:26.672287 containerd[1536]: time="2025-05-15T12:52:26.672255789Z" level=info msg="CreateContainer within sandbox \"7f168a58389272332e2519584a46f384575c19cf860482036ce9b51a0fc3a0e1\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"313b955a5f158124ee7f5810a2c9359060133f898d0991db5963dadad919bd9b\"" May 15 12:52:26.672889 containerd[1536]: time="2025-05-15T12:52:26.672865579Z" level=info msg="StartContainer for \"313b955a5f158124ee7f5810a2c9359060133f898d0991db5963dadad919bd9b\"" May 15 12:52:26.673544 containerd[1536]: time="2025-05-15T12:52:26.673525130Z" 
level=info msg="connecting to shim 313b955a5f158124ee7f5810a2c9359060133f898d0991db5963dadad919bd9b" address="unix:///run/containerd/s/e11a92f5e38a3bbfbf34712eaf10e83ac58da64dd10d853ee6e0b62926c6b095" protocol=ttrpc version=3 May 15 12:52:26.716582 systemd[1]: Started cri-containerd-313b955a5f158124ee7f5810a2c9359060133f898d0991db5963dadad919bd9b.scope - libcontainer container 313b955a5f158124ee7f5810a2c9359060133f898d0991db5963dadad919bd9b. May 15 12:52:26.777683 containerd[1536]: time="2025-05-15T12:52:26.777644342Z" level=info msg="StartContainer for \"313b955a5f158124ee7f5810a2c9359060133f898d0991db5963dadad919bd9b\" returns successfully" May 15 12:52:27.473878 kubelet[2701]: I0515 12:52:27.473763 2701 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6f6897fdc5-8rg8f" podStartSLOduration=1.543306855 podStartE2EDuration="3.4737437s" podCreationTimestamp="2025-05-15 12:52:24 +0000 UTC" firstStartedPulling="2025-05-15 12:52:24.729439008 +0000 UTC m=+6.423349270" lastFinishedPulling="2025-05-15 12:52:26.659875853 +0000 UTC m=+8.353786115" observedRunningTime="2025-05-15 12:52:27.472865599 +0000 UTC m=+9.166775861" watchObservedRunningTime="2025-05-15 12:52:27.4737437 +0000 UTC m=+9.167653982" May 15 12:52:29.019934 kubelet[2701]: E0515 12:52:29.019873 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:52:30.221233 systemd[1]: Created slice kubepods-besteffort-pod3c985bdd_efe4_4486_b443_b9f4831fd9f5.slice - libcontainer container kubepods-besteffort-pod3c985bdd_efe4_4486_b443_b9f4831fd9f5.slice. May 15 12:52:30.225441 kubelet[2701]: I0515 12:52:30.225259 2701 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c985bdd-efe4-4486-b443-b9f4831fd9f5-tigera-ca-bundle\") pod \"calico-typha-59b79bbb46-9qqgw\" (UID: \"3c985bdd-efe4-4486-b443-b9f4831fd9f5\") " pod="calico-system/calico-typha-59b79bbb46-9qqgw" May 15 12:52:30.225441 kubelet[2701]: I0515 12:52:30.225303 2701 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3c985bdd-efe4-4486-b443-b9f4831fd9f5-typha-certs\") pod \"calico-typha-59b79bbb46-9qqgw\" (UID: \"3c985bdd-efe4-4486-b443-b9f4831fd9f5\") " pod="calico-system/calico-typha-59b79bbb46-9qqgw" May 15 12:52:30.225441 kubelet[2701]: I0515 12:52:30.225319 2701 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckvmq\" (UniqueName: \"kubernetes.io/projected/3c985bdd-efe4-4486-b443-b9f4831fd9f5-kube-api-access-ckvmq\") pod \"calico-typha-59b79bbb46-9qqgw\" (UID: \"3c985bdd-efe4-4486-b443-b9f4831fd9f5\") " pod="calico-system/calico-typha-59b79bbb46-9qqgw" May 15 12:52:30.305499 systemd[1]: Created slice kubepods-besteffort-podf69f6551_7509_4bf1_a40a_1170e25b66f0.slice - libcontainer container kubepods-besteffort-podf69f6551_7509_4bf1_a40a_1170e25b66f0.slice. 
May 15 12:52:30.327570 kubelet[2701]: I0515 12:52:30.327542 2701 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f69f6551-7509-4bf1-a40a-1170e25b66f0-var-run-calico\") pod \"calico-node-qxkgl\" (UID: \"f69f6551-7509-4bf1-a40a-1170e25b66f0\") " pod="calico-system/calico-node-qxkgl" May 15 12:52:30.327696 kubelet[2701]: I0515 12:52:30.327683 2701 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f69f6551-7509-4bf1-a40a-1170e25b66f0-cni-net-dir\") pod \"calico-node-qxkgl\" (UID: \"f69f6551-7509-4bf1-a40a-1170e25b66f0\") " pod="calico-system/calico-node-qxkgl" May 15 12:52:30.327760 kubelet[2701]: I0515 12:52:30.327749 2701 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f69f6551-7509-4bf1-a40a-1170e25b66f0-node-certs\") pod \"calico-node-qxkgl\" (UID: \"f69f6551-7509-4bf1-a40a-1170e25b66f0\") " pod="calico-system/calico-node-qxkgl" May 15 12:52:30.327827 kubelet[2701]: I0515 12:52:30.327815 2701 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f69f6551-7509-4bf1-a40a-1170e25b66f0-lib-modules\") pod \"calico-node-qxkgl\" (UID: \"f69f6551-7509-4bf1-a40a-1170e25b66f0\") " pod="calico-system/calico-node-qxkgl" May 15 12:52:30.327880 kubelet[2701]: I0515 12:52:30.327870 2701 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f69f6551-7509-4bf1-a40a-1170e25b66f0-policysync\") pod \"calico-node-qxkgl\" (UID: \"f69f6551-7509-4bf1-a40a-1170e25b66f0\") " pod="calico-system/calico-node-qxkgl" May 15 12:52:30.327927 kubelet[2701]: I0515 12:52:30.327917 2701 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f69f6551-7509-4bf1-a40a-1170e25b66f0-tigera-ca-bundle\") pod \"calico-node-qxkgl\" (UID: \"f69f6551-7509-4bf1-a40a-1170e25b66f0\") " pod="calico-system/calico-node-qxkgl" May 15 12:52:30.327979 kubelet[2701]: I0515 12:52:30.327969 2701 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f69f6551-7509-4bf1-a40a-1170e25b66f0-xtables-lock\") pod \"calico-node-qxkgl\" (UID: \"f69f6551-7509-4bf1-a40a-1170e25b66f0\") " pod="calico-system/calico-node-qxkgl" May 15 12:52:30.328050 kubelet[2701]: I0515 12:52:30.328023 2701 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f69f6551-7509-4bf1-a40a-1170e25b66f0-cni-bin-dir\") pod \"calico-node-qxkgl\" (UID: \"f69f6551-7509-4bf1-a40a-1170e25b66f0\") " pod="calico-system/calico-node-qxkgl" May 15 12:52:30.328105 kubelet[2701]: I0515 12:52:30.328094 2701 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zm75\" (UniqueName: \"kubernetes.io/projected/f69f6551-7509-4bf1-a40a-1170e25b66f0-kube-api-access-9zm75\") pod \"calico-node-qxkgl\" (UID: \"f69f6551-7509-4bf1-a40a-1170e25b66f0\") " pod="calico-system/calico-node-qxkgl" May 15 12:52:30.328153 kubelet[2701]: I0515 12:52:30.328144 2701 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f69f6551-7509-4bf1-a40a-1170e25b66f0-cni-log-dir\") pod \"calico-node-qxkgl\" (UID: \"f69f6551-7509-4bf1-a40a-1170e25b66f0\") " pod="calico-system/calico-node-qxkgl" May 15 12:52:30.328234 kubelet[2701]: I0515 12:52:30.328209 2701 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f69f6551-7509-4bf1-a40a-1170e25b66f0-var-lib-calico\") pod \"calico-node-qxkgl\" (UID: \"f69f6551-7509-4bf1-a40a-1170e25b66f0\") " pod="calico-system/calico-node-qxkgl" May 15 12:52:30.328311 kubelet[2701]: I0515 12:52:30.328298 2701 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f69f6551-7509-4bf1-a40a-1170e25b66f0-flexvol-driver-host\") pod \"calico-node-qxkgl\" (UID: \"f69f6551-7509-4bf1-a40a-1170e25b66f0\") " pod="calico-system/calico-node-qxkgl" May 15 12:52:30.432700 kubelet[2701]: E0515 12:52:30.432674 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:30.432937 kubelet[2701]: W0515 12:52:30.432823 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:30.436035 kubelet[2701]: E0515 12:52:30.435928 2701 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:52:30.447368 kubelet[2701]: E0515 12:52:30.447013 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:30.447368 kubelet[2701]: W0515 12:52:30.447028 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:30.447368 kubelet[2701]: E0515 12:52:30.447043 2701 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:52:30.447585 kubelet[2701]: E0515 12:52:30.447573 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:30.447641 kubelet[2701]: W0515 12:52:30.447630 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:30.447705 kubelet[2701]: E0515 12:52:30.447694 2701 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 12:52:30.479682 kubelet[2701]: E0515 12:52:30.479587 2701 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d7hrh" podUID="30c809cf-5d96-45fe-9af3-2a80162d2f28" May 15 12:52:30.528488 kubelet[2701]: E0515 12:52:30.527564 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:30.528681 kubelet[2701]: W0515 12:52:30.528619 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:30.528681 kubelet[2701]: E0515 12:52:30.528642 2701 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:52:30.529149 kubelet[2701]: E0515 12:52:30.529095 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:30.529149 kubelet[2701]: W0515 12:52:30.529105 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:30.529149 kubelet[2701]: E0515 12:52:30.529113 2701 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:52:30.529668 kubelet[2701]: E0515 12:52:30.529613 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:30.529668 kubelet[2701]: W0515 12:52:30.529623 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:30.529668 kubelet[2701]: E0515 12:52:30.529632 2701 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:52:30.529913 kubelet[2701]: E0515 12:52:30.529881 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:30.529913 kubelet[2701]: W0515 12:52:30.529891 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:30.529913 kubelet[2701]: E0515 12:52:30.529899 2701 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 12:52:30.530256 kubelet[2701]: E0515 12:52:30.530207 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:30.530256 kubelet[2701]: W0515 12:52:30.530216 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:30.530256 kubelet[2701]: E0515 12:52:30.530224 2701 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:52:30.530572 kubelet[2701]: E0515 12:52:30.530448 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:30.530572 kubelet[2701]: W0515 12:52:30.530529 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:30.530572 kubelet[2701]: E0515 12:52:30.530538 2701 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:52:30.530847 kubelet[2701]: E0515 12:52:30.530796 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:30.530847 kubelet[2701]: W0515 12:52:30.530806 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:30.530847 kubelet[2701]: E0515 12:52:30.530814 2701 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:52:30.531105 kubelet[2701]: E0515 12:52:30.531046 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:30.531105 kubelet[2701]: W0515 12:52:30.531058 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:30.531105 kubelet[2701]: E0515 12:52:30.531082 2701 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 12:52:30.532497 kubelet[2701]: E0515 12:52:30.531819 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:52:30.532715 kubelet[2701]: E0515 12:52:30.532702 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:30.533237 containerd[1536]: time="2025-05-15T12:52:30.532843681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-59b79bbb46-9qqgw,Uid:3c985bdd-efe4-4486-b443-b9f4831fd9f5,Namespace:calico-system,Attempt:0,}" May 15 12:52:30.533578 kubelet[2701]: W0515 12:52:30.533132 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:30.533578 kubelet[2701]: E0515 12:52:30.533148 2701 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:52:30.533578 kubelet[2701]: I0515 12:52:30.533166 2701 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/30c809cf-5d96-45fe-9af3-2a80162d2f28-varrun\") pod \"csi-node-driver-d7hrh\" (UID: \"30c809cf-5d96-45fe-9af3-2a80162d2f28\") " pod="calico-system/csi-node-driver-d7hrh" May 15 12:52:30.534823 kubelet[2701]: E0515 12:52:30.534694 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:30.534823 kubelet[2701]: W0515 12:52:30.534705 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:30.534823 kubelet[2701]: E0515 12:52:30.534715 2701 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:52:30.534823 kubelet[2701]: I0515 12:52:30.534728 2701 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/30c809cf-5d96-45fe-9af3-2a80162d2f28-kubelet-dir\") pod \"csi-node-driver-d7hrh\" (UID: \"30c809cf-5d96-45fe-9af3-2a80162d2f28\") " pod="calico-system/csi-node-driver-d7hrh" May 15 12:52:30.535056 kubelet[2701]: E0515 12:52:30.535032 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:30.535178 kubelet[2701]: W0515 12:52:30.535094 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:30.535178 kubelet[2701]: E0515 12:52:30.535107 2701 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 12:52:30.535723 kubelet[2701]: E0515 12:52:30.535591 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:30.535723 kubelet[2701]: W0515 12:52:30.535602 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:30.535723 kubelet[2701]: E0515 12:52:30.535611 2701 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:52:30.537613 kubelet[2701]: E0515 12:52:30.537523 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:30.537613 kubelet[2701]: W0515 12:52:30.537535 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:30.537613 kubelet[2701]: E0515 12:52:30.537557 2701 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:52:30.538107 kubelet[2701]: E0515 12:52:30.538085 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:30.538107 kubelet[2701]: W0515 12:52:30.538095 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:30.538934 kubelet[2701]: E0515 12:52:30.538897 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:30.538934 kubelet[2701]: W0515 12:52:30.538928 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:30.539006 kubelet[2701]: E0515 12:52:30.538954 2701 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:52:30.539141 kubelet[2701]: E0515 12:52:30.539059 2701 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:52:30.539141 kubelet[2701]: E0515 12:52:30.539136 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:30.539220 kubelet[2701]: W0515 12:52:30.539144 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:30.539220 kubelet[2701]: E0515 12:52:30.539152 2701 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 12:52:30.539633 kubelet[2701]: E0515 12:52:30.539609 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:30.539633 kubelet[2701]: W0515 12:52:30.539627 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:30.539633 kubelet[2701]: E0515 12:52:30.539635 2701 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:52:30.540382 kubelet[2701]: E0515 12:52:30.540362 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:30.540382 kubelet[2701]: W0515 12:52:30.540376 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:30.540382 kubelet[2701]: E0515 12:52:30.540384 2701 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:52:30.542613 kubelet[2701]: E0515 12:52:30.541524 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:30.542613 kubelet[2701]: W0515 12:52:30.541536 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:30.542613 kubelet[2701]: E0515 12:52:30.541545 2701 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:52:30.542613 kubelet[2701]: E0515 12:52:30.541882 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:30.542613 kubelet[2701]: W0515 12:52:30.541889 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:30.542613 kubelet[2701]: E0515 12:52:30.541896 2701 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:52:30.542613 kubelet[2701]: E0515 12:52:30.542008 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:30.542613 kubelet[2701]: W0515 12:52:30.542014 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:30.542613 kubelet[2701]: E0515 12:52:30.542021 2701 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 12:52:30.542613 kubelet[2701]: E0515 12:52:30.542137 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:30.542961 kubelet[2701]: W0515 12:52:30.542143 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:30.542961 kubelet[2701]: E0515 12:52:30.542150 2701 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:52:30.542961 kubelet[2701]: E0515 12:52:30.542271 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:30.542961 kubelet[2701]: W0515 12:52:30.542279 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:30.542961 kubelet[2701]: E0515 12:52:30.542286 2701 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:52:30.542961 kubelet[2701]: E0515 12:52:30.542395 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:30.542961 kubelet[2701]: W0515 12:52:30.542401 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:30.542961 kubelet[2701]: E0515 12:52:30.542408 2701 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:52:30.543310 kubelet[2701]: E0515 12:52:30.543292 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:30.543310 kubelet[2701]: W0515 12:52:30.543306 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:30.543310 kubelet[2701]: E0515 12:52:30.543314 2701 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:52:30.543861 kubelet[2701]: E0515 12:52:30.543432 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:30.543861 kubelet[2701]: W0515 12:52:30.543439 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:30.543861 kubelet[2701]: E0515 12:52:30.543445 2701 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 12:52:30.598662 containerd[1536]: time="2025-05-15T12:52:30.598583401Z" level=info msg="connecting to shim 4f06ab959585ba6d6d91e297ef8f98339fd635b48cf39bc18ecaa6b682a1116b" address="unix:///run/containerd/s/c42bff00c58c2cd668476e3bbd9d51babc1b125b1fc8c621c3276a52373aec13" namespace=k8s.io protocol=ttrpc version=3 May 15 12:52:30.688471 kubelet[2701]: E0515 12:52:30.687744 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:30.688471 kubelet[2701]: W0515 12:52:30.687765 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:30.688471 kubelet[2701]: E0515 12:52:30.687782 2701 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:52:30.688471 kubelet[2701]: E0515 12:52:30.688068 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:52:30.689645 containerd[1536]: time="2025-05-15T12:52:30.689181593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qxkgl,Uid:f69f6551-7509-4bf1-a40a-1170e25b66f0,Namespace:calico-system,Attempt:0,}" May 15 12:52:30.689924 kubelet[2701]: E0515 12:52:30.689483 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:30.689924 kubelet[2701]: W0515 12:52:30.689493 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:30.689924 kubelet[2701]: E0515 12:52:30.689507 2701 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:52:30.690799 kubelet[2701]: E0515 12:52:30.690786 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:30.691039 kubelet[2701]: W0515 12:52:30.691025 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:30.692224 kubelet[2701]: E0515 12:52:30.692045 2701 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 12:52:30.692224 kubelet[2701]: I0515 12:52:30.692106 2701 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/30c809cf-5d96-45fe-9af3-2a80162d2f28-socket-dir\") pod \"csi-node-driver-d7hrh\" (UID: \"30c809cf-5d96-45fe-9af3-2a80162d2f28\") " pod="calico-system/csi-node-driver-d7hrh" May 15 12:52:30.692807 kubelet[2701]: E0515 12:52:30.692719 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:30.692807 kubelet[2701]: W0515 12:52:30.692748 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:30.692807 kubelet[2701]: E0515 12:52:30.692774 2701 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:52:30.692889 kubelet[2701]: I0515 12:52:30.692817 2701 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs8kc\" (UniqueName: \"kubernetes.io/projected/30c809cf-5d96-45fe-9af3-2a80162d2f28-kube-api-access-fs8kc\") pod \"csi-node-driver-d7hrh\" (UID: \"30c809cf-5d96-45fe-9af3-2a80162d2f28\") " pod="calico-system/csi-node-driver-d7hrh" May 15 12:52:30.694873 kubelet[2701]: E0515 12:52:30.693704 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:30.694873 kubelet[2701]: W0515 12:52:30.693846 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:30.694873 kubelet[2701]: E0515 12:52:30.693859 2701 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:52:30.694873 kubelet[2701]: I0515 12:52:30.693875 2701 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/30c809cf-5d96-45fe-9af3-2a80162d2f28-registration-dir\") pod \"csi-node-driver-d7hrh\" (UID: \"30c809cf-5d96-45fe-9af3-2a80162d2f28\") " pod="calico-system/csi-node-driver-d7hrh" May 15 12:52:30.694873 kubelet[2701]: E0515 12:52:30.694588 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:30.694873 kubelet[2701]: W0515 12:52:30.694600 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:30.694873 kubelet[2701]: E0515 12:52:30.694610 2701 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 12:52:30.695127 kubelet[2701]: E0515 12:52:30.695095 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:30.695809 kubelet[2701]: W0515 12:52:30.695511 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:30.695809 kubelet[2701]: E0515 12:52:30.695532 2701 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:52:30.696179 kubelet[2701]: E0515 12:52:30.696140 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:30.696179 kubelet[2701]: W0515 12:52:30.696169 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:30.696245 kubelet[2701]: E0515 12:52:30.696210 2701 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:52:30.697408 kubelet[2701]: E0515 12:52:30.697382 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:30.697408 kubelet[2701]: W0515 12:52:30.697400 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:30.698448 kubelet[2701]: E0515 12:52:30.698384 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:30.698448 kubelet[2701]: W0515 12:52:30.698405 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:30.699516 kubelet[2701]: E0515 12:52:30.698969 2701 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:52:30.699516 kubelet[2701]: E0515 12:52:30.699503 2701 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:52:30.700337 kubelet[2701]: E0515 12:52:30.700307 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:30.700337 kubelet[2701]: W0515 12:52:30.700324 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:30.700514 kubelet[2701]: E0515 12:52:30.700492 2701 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 12:52:30.700922 kubelet[2701]: E0515 12:52:30.700886 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:30.700922 kubelet[2701]: W0515 12:52:30.700924 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:30.701190 kubelet[2701]: E0515 12:52:30.701017 2701 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:52:30.701676 kubelet[2701]: E0515 12:52:30.701582 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:30.701676 kubelet[2701]: W0515 12:52:30.701595 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:30.701746 kubelet[2701]: E0515 12:52:30.701682 2701 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:52:30.702286 kubelet[2701]: E0515 12:52:30.702257 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:30.702397 kubelet[2701]: W0515 12:52:30.702372 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:30.702556 kubelet[2701]: E0515 12:52:30.702533 2701 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:52:30.702964 kubelet[2701]: E0515 12:52:30.702944 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:30.703016 kubelet[2701]: W0515 12:52:30.702959 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:30.703429 kubelet[2701]: E0515 12:52:30.703127 2701 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:52:30.704182 kubelet[2701]: E0515 12:52:30.704160 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:30.704182 kubelet[2701]: W0515 12:52:30.704176 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:30.704255 kubelet[2701]: E0515 12:52:30.704226 2701 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 12:52:30.704666 kubelet[2701]: E0515 12:52:30.704642 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:30.704666 kubelet[2701]: W0515 12:52:30.704658 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:30.704733 kubelet[2701]: E0515 12:52:30.704724 2701 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:52:30.704944 kubelet[2701]: E0515 12:52:30.704924 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:30.704944 kubelet[2701]: W0515 12:52:30.704937 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:30.705413 kubelet[2701]: E0515 12:52:30.705378 2701 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:52:30.705613 kubelet[2701]: E0515 12:52:30.705598 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:30.705613 kubelet[2701]: W0515 12:52:30.705609 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:30.715376 kubelet[2701]: E0515 12:52:30.705617 2701 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:52:30.725699 systemd[1]: Started cri-containerd-4f06ab959585ba6d6d91e297ef8f98339fd635b48cf39bc18ecaa6b682a1116b.scope - libcontainer container 4f06ab959585ba6d6d91e297ef8f98339fd635b48cf39bc18ecaa6b682a1116b. May 15 12:52:30.759018 containerd[1536]: time="2025-05-15T12:52:30.758751074Z" level=info msg="connecting to shim e51c2b8cf5a3e81b67fe010150dff461cfefb91677bdd6c4499f51f57d3dcee2" address="unix:///run/containerd/s/db215aaa64dd25f9aaecf2fa877acce3b58fc3006b91cf52e51663e75ec3ec90" namespace=k8s.io protocol=ttrpc version=3 May 15 12:52:30.796978 kubelet[2701]: E0515 12:52:30.796934 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:30.796978 kubelet[2701]: W0515 12:52:30.796997 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:30.797193 kubelet[2701]: E0515 12:52:30.797023 2701 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 12:52:30.797301 kubelet[2701]: E0515 12:52:30.797278 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:30.797301 kubelet[2701]: W0515 12:52:30.797294 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:30.797378 kubelet[2701]: E0515 12:52:30.797316 2701 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:52:30.798663 kubelet[2701]: E0515 12:52:30.798613 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:30.798663 kubelet[2701]: W0515 12:52:30.798632 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:30.798663 kubelet[2701]: E0515 12:52:30.798643 2701 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:52:30.800290 kubelet[2701]: E0515 12:52:30.800149 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:30.800290 kubelet[2701]: W0515 12:52:30.800163 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:30.800290 kubelet[2701]: E0515 12:52:30.800218 2701 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:52:30.800913 kubelet[2701]: E0515 12:52:30.800819 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:30.800913 kubelet[2701]: W0515 12:52:30.800833 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:30.800913 kubelet[2701]: E0515 12:52:30.800862 2701 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 12:52:30.803378 kubelet[2701]: E0515 12:52:30.803350 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:30.803378 kubelet[2701]: W0515 12:52:30.803369 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:30.803583 kubelet[2701]: E0515 12:52:30.803532 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:30.803583 kubelet[2701]: W0515 12:52:30.803548 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:30.804009 kubelet[2701]: E0515 12:52:30.803952 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:30.804009 kubelet[2701]: W0515 12:52:30.803966 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:30.804078 kubelet[2701]: E0515 12:52:30.804014 2701 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:52:30.804502 kubelet[2701]: E0515 12:52:30.804452 2701 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:52:30.804538 kubelet[2701]: E0515 12:52:30.804516 2701 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:52:30.804538 kubelet[2701]: E0515 12:52:30.804549 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:30.804538 kubelet[2701]: W0515 12:52:30.804556 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:30.804538 kubelet[2701]: E0515 12:52:30.804565 2701 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:52:30.806814 kubelet[2701]: E0515 12:52:30.806788 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:30.806814 kubelet[2701]: W0515 12:52:30.806807 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:30.806904 kubelet[2701]: E0515 12:52:30.806847 2701 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 12:52:30.810048 kubelet[2701]: E0515 12:52:30.807965 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:30.810048 kubelet[2701]: W0515 12:52:30.807982 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:30.810048 kubelet[2701]: E0515 12:52:30.808159 2701 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:52:30.810048 kubelet[2701]: E0515 12:52:30.808558 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:30.810048 kubelet[2701]: W0515 12:52:30.808567 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:30.810048 kubelet[2701]: E0515 12:52:30.809068 2701 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:52:30.810671 kubelet[2701]: E0515 12:52:30.810621 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:30.810671 kubelet[2701]: W0515 12:52:30.810637 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:30.812298 kubelet[2701]: E0515 12:52:30.812120 2701 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:52:30.812600 kubelet[2701]: E0515 12:52:30.812421 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:30.812600 kubelet[2701]: W0515 12:52:30.812430 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:30.813773 kubelet[2701]: E0515 12:52:30.813253 2701 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:52:30.814424 kubelet[2701]: E0515 12:52:30.814297 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:30.814862 kubelet[2701]: W0515 12:52:30.814632 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:30.814862 kubelet[2701]: E0515 12:52:30.814648 2701 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 12:52:30.819893 systemd[1]: Started cri-containerd-e51c2b8cf5a3e81b67fe010150dff461cfefb91677bdd6c4499f51f57d3dcee2.scope - libcontainer container e51c2b8cf5a3e81b67fe010150dff461cfefb91677bdd6c4499f51f57d3dcee2. May 15 12:52:30.830775 kubelet[2701]: E0515 12:52:30.830664 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:30.830775 kubelet[2701]: W0515 12:52:30.830687 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:30.830775 kubelet[2701]: E0515 12:52:30.830698 2701 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:52:31.059215 containerd[1536]: time="2025-05-15T12:52:31.057896750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qxkgl,Uid:f69f6551-7509-4bf1-a40a-1170e25b66f0,Namespace:calico-system,Attempt:0,} returns sandbox id \"e51c2b8cf5a3e81b67fe010150dff461cfefb91677bdd6c4499f51f57d3dcee2\"" May 15 12:52:31.060579 kubelet[2701]: E0515 12:52:31.060357 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:52:31.065023 containerd[1536]: time="2025-05-15T12:52:31.064969573Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 15 12:52:31.083050 containerd[1536]: time="2025-05-15T12:52:31.083004910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-59b79bbb46-9qqgw,Uid:3c985bdd-efe4-4486-b443-b9f4831fd9f5,Namespace:calico-system,Attempt:0,} returns sandbox id \"4f06ab959585ba6d6d91e297ef8f98339fd635b48cf39bc18ecaa6b682a1116b\"" May 15 12:52:31.083735 kubelet[2701]: E0515 12:52:31.083715 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:52:32.199980 update_engine[1526]: I20250515 12:52:32.199867 1526 update_attempter.cc:509] Updating boot flags... 
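The driver-call.go / plugins.go loop above is kubelet's FlexVolume prober: it execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init, the binary is missing, stdout comes back empty, and unmarshalling "" as JSON yields "unexpected end of JSON input". A minimal sketch of a driver that would satisfy the probe, assuming any executable at that path is acceptable (this is not the real nodeagent~uds driver, only the shape of the JSON reply kubelet expects):

    #!/usr/bin/env python3
    # Hypothetical stand-in for the missing FlexVolume driver at
    # /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds.
    # kubelet parses whatever the driver prints on stdout as JSON, so an
    # absent binary / empty output is exactly "unexpected end of JSON input".
    import json
    import sys

    def main() -> int:
        op = sys.argv[1] if len(sys.argv) > 1 else ""
        if op == "init":
            # Advertise no attach support; kubelet only needs a valid JSON status.
            print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
            return 0
        # All other FlexVolume calls are declared unsupported.
        print(json.dumps({"status": "Not supported"}))
        return 1

    if __name__ == "__main__":
        sys.exit(main())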
May 15 12:52:32.407156 kubelet[2701]: E0515 12:52:32.407077 2701 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d7hrh" podUID="30c809cf-5d96-45fe-9af3-2a80162d2f28" May 15 12:52:34.407797 kubelet[2701]: E0515 12:52:34.407737 2701 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d7hrh" podUID="30c809cf-5d96-45fe-9af3-2a80162d2f28" May 15 12:52:35.889028 containerd[1536]: time="2025-05-15T12:52:35.888922379Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:52:35.889836 containerd[1536]: time="2025-05-15T12:52:35.889806182Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937" May 15 12:52:35.890529 containerd[1536]: time="2025-05-15T12:52:35.890469263Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:52:35.892129 containerd[1536]: time="2025-05-15T12:52:35.892098518Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:52:35.893295 containerd[1536]: time="2025-05-15T12:52:35.893264530Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 4.828238127s" May 15 12:52:35.893450 containerd[1536]: time="2025-05-15T12:52:35.893295260Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" May 15 12:52:35.896946 containerd[1536]: time="2025-05-15T12:52:35.896915679Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 15 12:52:35.899031 containerd[1536]: time="2025-05-15T12:52:35.898997225Z" level=info msg="CreateContainer within sandbox \"e51c2b8cf5a3e81b67fe010150dff461cfefb91677bdd6c4499f51f57d3dcee2\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 15 12:52:35.908084 containerd[1536]: time="2025-05-15T12:52:35.908040738Z" level=info msg="Container 7c7b86837b949f829e701b0712d0b8af33b77550ac2890cab18107acb7c4d01e: CDI devices from CRI Config.CDIDevices: []" May 15 12:52:35.917534 containerd[1536]: time="2025-05-15T12:52:35.917494633Z" level=info msg="CreateContainer within sandbox \"e51c2b8cf5a3e81b67fe010150dff461cfefb91677bdd6c4499f51f57d3dcee2\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7c7b86837b949f829e701b0712d0b8af33b77550ac2890cab18107acb7c4d01e\"" May 15 12:52:35.918613 containerd[1536]: time="2025-05-15T12:52:35.918501705Z" level=info 
msg="StartContainer for \"7c7b86837b949f829e701b0712d0b8af33b77550ac2890cab18107acb7c4d01e\"" May 15 12:52:35.922087 containerd[1536]: time="2025-05-15T12:52:35.922054474Z" level=info msg="connecting to shim 7c7b86837b949f829e701b0712d0b8af33b77550ac2890cab18107acb7c4d01e" address="unix:///run/containerd/s/db215aaa64dd25f9aaecf2fa877acce3b58fc3006b91cf52e51663e75ec3ec90" protocol=ttrpc version=3 May 15 12:52:35.970589 systemd[1]: Started cri-containerd-7c7b86837b949f829e701b0712d0b8af33b77550ac2890cab18107acb7c4d01e.scope - libcontainer container 7c7b86837b949f829e701b0712d0b8af33b77550ac2890cab18107acb7c4d01e. May 15 12:52:36.026674 containerd[1536]: time="2025-05-15T12:52:36.026630247Z" level=info msg="StartContainer for \"7c7b86837b949f829e701b0712d0b8af33b77550ac2890cab18107acb7c4d01e\" returns successfully" May 15 12:52:36.063755 systemd[1]: cri-containerd-7c7b86837b949f829e701b0712d0b8af33b77550ac2890cab18107acb7c4d01e.scope: Deactivated successfully. May 15 12:52:36.066162 containerd[1536]: time="2025-05-15T12:52:36.066135433Z" level=info msg="received exit event container_id:\"7c7b86837b949f829e701b0712d0b8af33b77550ac2890cab18107acb7c4d01e\" id:\"7c7b86837b949f829e701b0712d0b8af33b77550ac2890cab18107acb7c4d01e\" pid:3296 exited_at:{seconds:1747313556 nanos:65832323}" May 15 12:52:36.067172 containerd[1536]: time="2025-05-15T12:52:36.067141215Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7c7b86837b949f829e701b0712d0b8af33b77550ac2890cab18107acb7c4d01e\" id:\"7c7b86837b949f829e701b0712d0b8af33b77550ac2890cab18107acb7c4d01e\" pid:3296 exited_at:{seconds:1747313556 nanos:65832323}" May 15 12:52:36.098316 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c7b86837b949f829e701b0712d0b8af33b77550ac2890cab18107acb7c4d01e-rootfs.mount: Deactivated successfully. 
May 15 12:52:36.407366 kubelet[2701]: E0515 12:52:36.407333 2701 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d7hrh" podUID="30c809cf-5d96-45fe-9af3-2a80162d2f28" May 15 12:52:36.487704 kubelet[2701]: E0515 12:52:36.487655 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:52:38.408381 kubelet[2701]: E0515 12:52:38.407448 2701 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d7hrh" podUID="30c809cf-5d96-45fe-9af3-2a80162d2f28" May 15 12:52:40.408518 kubelet[2701]: E0515 12:52:40.407680 2701 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d7hrh" podUID="30c809cf-5d96-45fe-9af3-2a80162d2f28" May 15 12:52:41.225667 containerd[1536]: time="2025-05-15T12:52:41.225607622Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:52:41.226699 containerd[1536]: time="2025-05-15T12:52:41.226579884Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870" May 15 12:52:41.227389 containerd[1536]: time="2025-05-15T12:52:41.227364366Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:52:41.229181 containerd[1536]: time="2025-05-15T12:52:41.229155919Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:52:41.229712 containerd[1536]: time="2025-05-15T12:52:41.229678340Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 5.332732611s" May 15 12:52:41.229808 containerd[1536]: time="2025-05-15T12:52:41.229793570Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\"" May 15 12:52:41.232370 containerd[1536]: time="2025-05-15T12:52:41.232178894Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 15 12:52:41.246969 containerd[1536]: time="2025-05-15T12:52:41.246947112Z" level=info msg="CreateContainer within sandbox \"4f06ab959585ba6d6d91e297ef8f98339fd635b48cf39bc18ecaa6b682a1116b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 15 12:52:41.252229 containerd[1536]: time="2025-05-15T12:52:41.252210222Z" level=info msg="Container 
4380bf542965176c5f8d8394c1088b7046b6e880f11e1ca9f0d61db372e89abd: CDI devices from CRI Config.CDIDevices: []" May 15 12:52:41.259561 containerd[1536]: time="2025-05-15T12:52:41.259527246Z" level=info msg="CreateContainer within sandbox \"4f06ab959585ba6d6d91e297ef8f98339fd635b48cf39bc18ecaa6b682a1116b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"4380bf542965176c5f8d8394c1088b7046b6e880f11e1ca9f0d61db372e89abd\"" May 15 12:52:41.259949 containerd[1536]: time="2025-05-15T12:52:41.259926517Z" level=info msg="StartContainer for \"4380bf542965176c5f8d8394c1088b7046b6e880f11e1ca9f0d61db372e89abd\"" May 15 12:52:41.260982 containerd[1536]: time="2025-05-15T12:52:41.260959798Z" level=info msg="connecting to shim 4380bf542965176c5f8d8394c1088b7046b6e880f11e1ca9f0d61db372e89abd" address="unix:///run/containerd/s/c42bff00c58c2cd668476e3bbd9d51babc1b125b1fc8c621c3276a52373aec13" protocol=ttrpc version=3 May 15 12:52:41.334587 systemd[1]: Started cri-containerd-4380bf542965176c5f8d8394c1088b7046b6e880f11e1ca9f0d61db372e89abd.scope - libcontainer container 4380bf542965176c5f8d8394c1088b7046b6e880f11e1ca9f0d61db372e89abd. May 15 12:52:41.402258 containerd[1536]: time="2025-05-15T12:52:41.402211166Z" level=info msg="StartContainer for \"4380bf542965176c5f8d8394c1088b7046b6e880f11e1ca9f0d61db372e89abd\" returns successfully" May 15 12:52:41.503321 kubelet[2701]: E0515 12:52:41.503213 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:52:41.518225 kubelet[2701]: I0515 12:52:41.517444 2701 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-59b79bbb46-9qqgw" podStartSLOduration=1.371915389 podStartE2EDuration="11.517431334s" podCreationTimestamp="2025-05-15 12:52:30 +0000 UTC" firstStartedPulling="2025-05-15 12:52:31.085058206 +0000 UTC m=+12.778968468" lastFinishedPulling="2025-05-15 12:52:41.230574151 +0000 UTC m=+22.924484413" observedRunningTime="2025-05-15 12:52:41.514917859 +0000 UTC m=+23.208828131" watchObservedRunningTime="2025-05-15 12:52:41.517431334 +0000 UTC m=+23.211341596" May 15 12:52:42.408257 kubelet[2701]: E0515 12:52:42.407112 2701 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d7hrh" podUID="30c809cf-5d96-45fe-9af3-2a80162d2f28" May 15 12:52:42.500204 kubelet[2701]: I0515 12:52:42.500172 2701 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 12:52:42.501630 kubelet[2701]: E0515 12:52:42.500544 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:52:44.409287 kubelet[2701]: E0515 12:52:44.407752 2701 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d7hrh" podUID="30c809cf-5d96-45fe-9af3-2a80162d2f28" May 15 12:52:46.459482 kubelet[2701]: E0515 12:52:46.459336 2701 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d7hrh" podUID="30c809cf-5d96-45fe-9af3-2a80162d2f28" May 15 12:52:48.468231 kubelet[2701]: E0515 12:52:48.468189 2701 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d7hrh" podUID="30c809cf-5d96-45fe-9af3-2a80162d2f28" May 15 12:52:48.491482 containerd[1536]: time="2025-05-15T12:52:48.491058191Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:52:48.491893 containerd[1536]: time="2025-05-15T12:52:48.491864941Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" May 15 12:52:48.492133 containerd[1536]: time="2025-05-15T12:52:48.492114243Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:52:48.493608 containerd[1536]: time="2025-05-15T12:52:48.493582974Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:52:48.494211 containerd[1536]: time="2025-05-15T12:52:48.494188906Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 7.261973802s" May 15 12:52:48.494279 containerd[1536]: time="2025-05-15T12:52:48.494265076Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" May 15 12:52:48.497602 containerd[1536]: time="2025-05-15T12:52:48.497580370Z" level=info msg="CreateContainer within sandbox \"e51c2b8cf5a3e81b67fe010150dff461cfefb91677bdd6c4499f51f57d3dcee2\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 15 12:52:48.509683 containerd[1536]: time="2025-05-15T12:52:48.509659796Z" level=info msg="Container e73f92b2142348573fba99475f4c107ad3bc0aa85d38cc41ca354d62b03a249c: CDI devices from CRI Config.CDIDevices: []" May 15 12:52:48.521237 containerd[1536]: time="2025-05-15T12:52:48.521207503Z" level=info msg="CreateContainer within sandbox \"e51c2b8cf5a3e81b67fe010150dff461cfefb91677bdd6c4499f51f57d3dcee2\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e73f92b2142348573fba99475f4c107ad3bc0aa85d38cc41ca354d62b03a249c\"" May 15 12:52:48.521625 containerd[1536]: time="2025-05-15T12:52:48.521585933Z" level=info msg="StartContainer for \"e73f92b2142348573fba99475f4c107ad3bc0aa85d38cc41ca354d62b03a249c\"" May 15 12:52:48.523572 containerd[1536]: time="2025-05-15T12:52:48.523546196Z" level=info msg="connecting to shim e73f92b2142348573fba99475f4c107ad3bc0aa85d38cc41ca354d62b03a249c" address="unix:///run/containerd/s/db215aaa64dd25f9aaecf2fa877acce3b58fc3006b91cf52e51663e75ec3ec90" protocol=ttrpc version=3 May 15 12:52:48.568576 
systemd[1]: Started cri-containerd-e73f92b2142348573fba99475f4c107ad3bc0aa85d38cc41ca354d62b03a249c.scope - libcontainer container e73f92b2142348573fba99475f4c107ad3bc0aa85d38cc41ca354d62b03a249c. May 15 12:52:48.635641 containerd[1536]: time="2025-05-15T12:52:48.635591631Z" level=info msg="StartContainer for \"e73f92b2142348573fba99475f4c107ad3bc0aa85d38cc41ca354d62b03a249c\" returns successfully" May 15 12:52:49.522501 kubelet[2701]: E0515 12:52:49.522209 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:52:50.495511 kubelet[2701]: E0515 12:52:50.407792 2701 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d7hrh" podUID="30c809cf-5d96-45fe-9af3-2a80162d2f28" May 15 12:52:50.526940 kubelet[2701]: E0515 12:52:50.526611 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:52:50.784306 containerd[1536]: time="2025-05-15T12:52:50.784030479Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 15 12:52:50.786931 systemd[1]: cri-containerd-e73f92b2142348573fba99475f4c107ad3bc0aa85d38cc41ca354d62b03a249c.scope: Deactivated successfully. May 15 12:52:50.787246 systemd[1]: cri-containerd-e73f92b2142348573fba99475f4c107ad3bc0aa85d38cc41ca354d62b03a249c.scope: Consumed 2.110s CPU time, 173.2M memory peak, 154M written to disk. May 15 12:52:50.788688 containerd[1536]: time="2025-05-15T12:52:50.788648095Z" level=info msg="received exit event container_id:\"e73f92b2142348573fba99475f4c107ad3bc0aa85d38cc41ca354d62b03a249c\" id:\"e73f92b2142348573fba99475f4c107ad3bc0aa85d38cc41ca354d62b03a249c\" pid:3391 exited_at:{seconds:1747313570 nanos:788198875}" May 15 12:52:50.788783 containerd[1536]: time="2025-05-15T12:52:50.788745365Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e73f92b2142348573fba99475f4c107ad3bc0aa85d38cc41ca354d62b03a249c\" id:\"e73f92b2142348573fba99475f4c107ad3bc0aa85d38cc41ca354d62b03a249c\" pid:3391 exited_at:{seconds:1747313570 nanos:788198875}" May 15 12:52:50.813361 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e73f92b2142348573fba99475f4c107ad3bc0aa85d38cc41ca354d62b03a249c-rootfs.mount: Deactivated successfully. May 15 12:52:50.886474 kubelet[2701]: I0515 12:52:50.886242 2701 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 15 12:52:50.933741 systemd[1]: Created slice kubepods-burstable-podbe5ba310_2fcd_4763_b9cf_8dba85ce0f76.slice - libcontainer container kubepods-burstable-podbe5ba310_2fcd_4763_b9cf_8dba85ce0f76.slice. 
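The recurring dns.go:153 warnings mean the node's resolv.conf lists more nameservers than kubelet will pass through: it keeps at most three and logs the applied line, which is why only 172.232.0.20, 172.232.0.15 and 172.232.0.18 appear. A rough sketch of that clamping; the fourth entry below is an assumption for illustration, since anything beyond the first three never shows up in the log:

    # Rough sketch of kubelet's nameserver clamp: it keeps at most three
    # nameservers from resolv.conf and warns "Nameserver limits exceeded".
    MAX_DNS_NAMESERVERS = 3

    resolv_conf = """\
    nameserver 172.232.0.20
    nameserver 172.232.0.15
    nameserver 172.232.0.18
    nameserver 8.8.8.8
    """

    nameservers = [line.split()[1] for line in resolv_conf.splitlines()
                   if line.strip().startswith("nameserver")]
    if len(nameservers) > MAX_DNS_NAMESERVERS:
        print("Nameserver limits exceeded, applied nameserver line is:",
              " ".join(nameservers[:MAX_DNS_NAMESERVERS]))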
May 15 12:52:50.942182 kubelet[2701]: W0515 12:52:50.941942 2701 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:172-232-9-197" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172-232-9-197' and this object May 15 12:52:50.942638 systemd[1]: Created slice kubepods-burstable-poda44ca807_6ed0_447f_9c4f_1a10e61b025b.slice - libcontainer container kubepods-burstable-poda44ca807_6ed0_447f_9c4f_1a10e61b025b.slice. May 15 12:52:50.945020 kubelet[2701]: E0515 12:52:50.944997 2701 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:172-232-9-197\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-232-9-197' and this object" logger="UnhandledError" May 15 12:52:50.960564 systemd[1]: Created slice kubepods-besteffort-pod9345d3c6_0d7e_4691_8b9a_ecd0176d0441.slice - libcontainer container kubepods-besteffort-pod9345d3c6_0d7e_4691_8b9a_ecd0176d0441.slice. May 15 12:52:50.968957 systemd[1]: Created slice kubepods-besteffort-pod970a8e31_5dc6_480a_b8f6_97944b700ab1.slice - libcontainer container kubepods-besteffort-pod970a8e31_5dc6_480a_b8f6_97944b700ab1.slice. May 15 12:52:50.976375 systemd[1]: Created slice kubepods-besteffort-pod91a29644_2dbd_46d3_a014_8444fea2be38.slice - libcontainer container kubepods-besteffort-pod91a29644_2dbd_46d3_a014_8444fea2be38.slice. May 15 12:52:51.024773 kubelet[2701]: I0515 12:52:51.024704 2701 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be5ba310-2fcd-4763-b9cf-8dba85ce0f76-config-volume\") pod \"coredns-6f6b679f8f-hfrjd\" (UID: \"be5ba310-2fcd-4763-b9cf-8dba85ce0f76\") " pod="kube-system/coredns-6f6b679f8f-hfrjd" May 15 12:52:51.024773 kubelet[2701]: I0515 12:52:51.024737 2701 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfz6s\" (UniqueName: \"kubernetes.io/projected/be5ba310-2fcd-4763-b9cf-8dba85ce0f76-kube-api-access-kfz6s\") pod \"coredns-6f6b679f8f-hfrjd\" (UID: \"be5ba310-2fcd-4763-b9cf-8dba85ce0f76\") " pod="kube-system/coredns-6f6b679f8f-hfrjd" May 15 12:52:51.125524 kubelet[2701]: I0515 12:52:51.125422 2701 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a44ca807-6ed0-447f-9c4f-1a10e61b025b-config-volume\") pod \"coredns-6f6b679f8f-svgwx\" (UID: \"a44ca807-6ed0-447f-9c4f-1a10e61b025b\") " pod="kube-system/coredns-6f6b679f8f-svgwx" May 15 12:52:51.125743 kubelet[2701]: I0515 12:52:51.125564 2701 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g28wm\" (UniqueName: \"kubernetes.io/projected/a44ca807-6ed0-447f-9c4f-1a10e61b025b-kube-api-access-g28wm\") pod \"coredns-6f6b679f8f-svgwx\" (UID: \"a44ca807-6ed0-447f-9c4f-1a10e61b025b\") " pod="kube-system/coredns-6f6b679f8f-svgwx" May 15 12:52:51.125743 kubelet[2701]: I0515 12:52:51.125617 2701 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8k9h\" (UniqueName: \"kubernetes.io/projected/970a8e31-5dc6-480a-b8f6-97944b700ab1-kube-api-access-s8k9h\") pod 
\"calico-apiserver-7959dc45d5-rx2qm\" (UID: \"970a8e31-5dc6-480a-b8f6-97944b700ab1\") " pod="calico-apiserver/calico-apiserver-7959dc45d5-rx2qm" May 15 12:52:51.125743 kubelet[2701]: I0515 12:52:51.125652 2701 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdhfc\" (UniqueName: \"kubernetes.io/projected/9345d3c6-0d7e-4691-8b9a-ecd0176d0441-kube-api-access-xdhfc\") pod \"calico-kube-controllers-6cff7bc5b6-sff94\" (UID: \"9345d3c6-0d7e-4691-8b9a-ecd0176d0441\") " pod="calico-system/calico-kube-controllers-6cff7bc5b6-sff94" May 15 12:52:51.125743 kubelet[2701]: I0515 12:52:51.125689 2701 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/91a29644-2dbd-46d3-a014-8444fea2be38-calico-apiserver-certs\") pod \"calico-apiserver-7959dc45d5-ldnxl\" (UID: \"91a29644-2dbd-46d3-a014-8444fea2be38\") " pod="calico-apiserver/calico-apiserver-7959dc45d5-ldnxl" May 15 12:52:51.125743 kubelet[2701]: I0515 12:52:51.125723 2701 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzgqq\" (UniqueName: \"kubernetes.io/projected/91a29644-2dbd-46d3-a014-8444fea2be38-kube-api-access-nzgqq\") pod \"calico-apiserver-7959dc45d5-ldnxl\" (UID: \"91a29644-2dbd-46d3-a014-8444fea2be38\") " pod="calico-apiserver/calico-apiserver-7959dc45d5-ldnxl" May 15 12:52:51.125879 kubelet[2701]: I0515 12:52:51.125763 2701 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9345d3c6-0d7e-4691-8b9a-ecd0176d0441-tigera-ca-bundle\") pod \"calico-kube-controllers-6cff7bc5b6-sff94\" (UID: \"9345d3c6-0d7e-4691-8b9a-ecd0176d0441\") " pod="calico-system/calico-kube-controllers-6cff7bc5b6-sff94" May 15 12:52:51.125879 kubelet[2701]: I0515 12:52:51.125791 2701 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/970a8e31-5dc6-480a-b8f6-97944b700ab1-calico-apiserver-certs\") pod \"calico-apiserver-7959dc45d5-rx2qm\" (UID: \"970a8e31-5dc6-480a-b8f6-97944b700ab1\") " pod="calico-apiserver/calico-apiserver-7959dc45d5-rx2qm" May 15 12:52:51.270108 containerd[1536]: time="2025-05-15T12:52:51.270067838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cff7bc5b6-sff94,Uid:9345d3c6-0d7e-4691-8b9a-ecd0176d0441,Namespace:calico-system,Attempt:0,}" May 15 12:52:51.274072 containerd[1536]: time="2025-05-15T12:52:51.273990682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7959dc45d5-rx2qm,Uid:970a8e31-5dc6-480a-b8f6-97944b700ab1,Namespace:calico-apiserver,Attempt:0,}" May 15 12:52:51.282503 containerd[1536]: time="2025-05-15T12:52:51.281986432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7959dc45d5-ldnxl,Uid:91a29644-2dbd-46d3-a014-8444fea2be38,Namespace:calico-apiserver,Attempt:0,}" May 15 12:52:51.370693 containerd[1536]: time="2025-05-15T12:52:51.370642552Z" level=error msg="Failed to destroy network for sandbox \"798665676fea4608e92bcfafea6f138fae6ef9417c173c258abd2bda50f875e4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:52:51.372595 containerd[1536]: 
time="2025-05-15T12:52:51.372545944Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7959dc45d5-ldnxl,Uid:91a29644-2dbd-46d3-a014-8444fea2be38,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"798665676fea4608e92bcfafea6f138fae6ef9417c173c258abd2bda50f875e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:52:51.372966 kubelet[2701]: E0515 12:52:51.372882 2701 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"798665676fea4608e92bcfafea6f138fae6ef9417c173c258abd2bda50f875e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:52:51.373277 kubelet[2701]: E0515 12:52:51.372996 2701 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"798665676fea4608e92bcfafea6f138fae6ef9417c173c258abd2bda50f875e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7959dc45d5-ldnxl" May 15 12:52:51.373277 kubelet[2701]: E0515 12:52:51.373230 2701 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"798665676fea4608e92bcfafea6f138fae6ef9417c173c258abd2bda50f875e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7959dc45d5-ldnxl" May 15 12:52:51.373357 kubelet[2701]: E0515 12:52:51.373304 2701 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7959dc45d5-ldnxl_calico-apiserver(91a29644-2dbd-46d3-a014-8444fea2be38)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7959dc45d5-ldnxl_calico-apiserver(91a29644-2dbd-46d3-a014-8444fea2be38)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"798665676fea4608e92bcfafea6f138fae6ef9417c173c258abd2bda50f875e4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7959dc45d5-ldnxl" podUID="91a29644-2dbd-46d3-a014-8444fea2be38" May 15 12:52:51.404958 containerd[1536]: time="2025-05-15T12:52:51.404439313Z" level=error msg="Failed to destroy network for sandbox \"42052323bc66c92ea6d9eec82fcf9643d55a4964ef80f999e39363cc6881eed0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:52:51.407150 containerd[1536]: time="2025-05-15T12:52:51.406902146Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cff7bc5b6-sff94,Uid:9345d3c6-0d7e-4691-8b9a-ecd0176d0441,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"42052323bc66c92ea6d9eec82fcf9643d55a4964ef80f999e39363cc6881eed0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:52:51.407320 kubelet[2701]: E0515 12:52:51.407279 2701 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42052323bc66c92ea6d9eec82fcf9643d55a4964ef80f999e39363cc6881eed0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:52:51.407372 kubelet[2701]: E0515 12:52:51.407336 2701 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42052323bc66c92ea6d9eec82fcf9643d55a4964ef80f999e39363cc6881eed0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6cff7bc5b6-sff94" May 15 12:52:51.407372 kubelet[2701]: E0515 12:52:51.407355 2701 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42052323bc66c92ea6d9eec82fcf9643d55a4964ef80f999e39363cc6881eed0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6cff7bc5b6-sff94" May 15 12:52:51.407419 kubelet[2701]: E0515 12:52:51.407387 2701 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6cff7bc5b6-sff94_calico-system(9345d3c6-0d7e-4691-8b9a-ecd0176d0441)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6cff7bc5b6-sff94_calico-system(9345d3c6-0d7e-4691-8b9a-ecd0176d0441)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"42052323bc66c92ea6d9eec82fcf9643d55a4964ef80f999e39363cc6881eed0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6cff7bc5b6-sff94" podUID="9345d3c6-0d7e-4691-8b9a-ecd0176d0441" May 15 12:52:51.421693 containerd[1536]: time="2025-05-15T12:52:51.421650184Z" level=error msg="Failed to destroy network for sandbox \"c679e85fd3efc3a36b80b3550e482386e86884c2e83c5eef09037e2df83434a8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:52:51.423346 containerd[1536]: time="2025-05-15T12:52:51.423302946Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7959dc45d5-rx2qm,Uid:970a8e31-5dc6-480a-b8f6-97944b700ab1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c679e85fd3efc3a36b80b3550e482386e86884c2e83c5eef09037e2df83434a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" May 15 12:52:51.423502 kubelet[2701]: E0515 12:52:51.423474 2701 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c679e85fd3efc3a36b80b3550e482386e86884c2e83c5eef09037e2df83434a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:52:51.423548 kubelet[2701]: E0515 12:52:51.423515 2701 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c679e85fd3efc3a36b80b3550e482386e86884c2e83c5eef09037e2df83434a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7959dc45d5-rx2qm" May 15 12:52:51.423548 kubelet[2701]: E0515 12:52:51.423533 2701 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c679e85fd3efc3a36b80b3550e482386e86884c2e83c5eef09037e2df83434a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7959dc45d5-rx2qm" May 15 12:52:51.423609 kubelet[2701]: E0515 12:52:51.423577 2701 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7959dc45d5-rx2qm_calico-apiserver(970a8e31-5dc6-480a-b8f6-97944b700ab1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7959dc45d5-rx2qm_calico-apiserver(970a8e31-5dc6-480a-b8f6-97944b700ab1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c679e85fd3efc3a36b80b3550e482386e86884c2e83c5eef09037e2df83434a8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7959dc45d5-rx2qm" podUID="970a8e31-5dc6-480a-b8f6-97944b700ab1" May 15 12:52:51.530353 kubelet[2701]: E0515 12:52:51.530311 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:52:51.531679 containerd[1536]: time="2025-05-15T12:52:51.531612409Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 15 12:52:51.841316 kubelet[2701]: E0515 12:52:51.841283 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:52:51.842426 containerd[1536]: time="2025-05-15T12:52:51.842022481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hfrjd,Uid:be5ba310-2fcd-4763-b9cf-8dba85ce0f76,Namespace:kube-system,Attempt:0,}" May 15 12:52:51.862467 kubelet[2701]: E0515 12:52:51.852439 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:52:51.863097 containerd[1536]: time="2025-05-15T12:52:51.863060227Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-svgwx,Uid:a44ca807-6ed0-447f-9c4f-1a10e61b025b,Namespace:kube-system,Attempt:0,}" May 15 12:52:51.938151 containerd[1536]: time="2025-05-15T12:52:51.937814709Z" level=error msg="Failed to destroy network for sandbox \"7882f59834b02eff639e8d78b52551d86c6d2ad2e8c84f36bfeffa003a8008e8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:52:51.942108 systemd[1]: run-netns-cni\x2d1fd497ae\x2d2831\x2d083f\x2d5dde\x2d8c3c84c84dbc.mount: Deactivated successfully. May 15 12:52:51.944342 containerd[1536]: time="2025-05-15T12:52:51.943830206Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hfrjd,Uid:be5ba310-2fcd-4763-b9cf-8dba85ce0f76,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7882f59834b02eff639e8d78b52551d86c6d2ad2e8c84f36bfeffa003a8008e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:52:51.944421 kubelet[2701]: E0515 12:52:51.944333 2701 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7882f59834b02eff639e8d78b52551d86c6d2ad2e8c84f36bfeffa003a8008e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:52:51.944536 kubelet[2701]: E0515 12:52:51.944420 2701 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7882f59834b02eff639e8d78b52551d86c6d2ad2e8c84f36bfeffa003a8008e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-hfrjd" May 15 12:52:51.944536 kubelet[2701]: E0515 12:52:51.944444 2701 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7882f59834b02eff639e8d78b52551d86c6d2ad2e8c84f36bfeffa003a8008e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-hfrjd" May 15 12:52:51.944822 kubelet[2701]: E0515 12:52:51.944527 2701 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-hfrjd_kube-system(be5ba310-2fcd-4763-b9cf-8dba85ce0f76)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-hfrjd_kube-system(be5ba310-2fcd-4763-b9cf-8dba85ce0f76)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7882f59834b02eff639e8d78b52551d86c6d2ad2e8c84f36bfeffa003a8008e8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-hfrjd" podUID="be5ba310-2fcd-4763-b9cf-8dba85ce0f76" May 15 12:52:51.950360 containerd[1536]: time="2025-05-15T12:52:51.950307094Z" 
level=error msg="Failed to destroy network for sandbox \"f10b1637aa571dd137a18e4a6627be1ff742e93327439da7dca54fd2007f94e0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:52:51.952224 containerd[1536]: time="2025-05-15T12:52:51.952192896Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-svgwx,Uid:a44ca807-6ed0-447f-9c4f-1a10e61b025b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f10b1637aa571dd137a18e4a6627be1ff742e93327439da7dca54fd2007f94e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:52:51.952443 kubelet[2701]: E0515 12:52:51.952335 2701 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f10b1637aa571dd137a18e4a6627be1ff742e93327439da7dca54fd2007f94e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:52:51.952443 kubelet[2701]: E0515 12:52:51.952382 2701 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f10b1637aa571dd137a18e4a6627be1ff742e93327439da7dca54fd2007f94e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-svgwx" May 15 12:52:51.952443 kubelet[2701]: E0515 12:52:51.952407 2701 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f10b1637aa571dd137a18e4a6627be1ff742e93327439da7dca54fd2007f94e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-svgwx" May 15 12:52:51.953232 kubelet[2701]: E0515 12:52:51.952440 2701 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-svgwx_kube-system(a44ca807-6ed0-447f-9c4f-1a10e61b025b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-svgwx_kube-system(a44ca807-6ed0-447f-9c4f-1a10e61b025b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f10b1637aa571dd137a18e4a6627be1ff742e93327439da7dca54fd2007f94e0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-svgwx" podUID="a44ca807-6ed0-447f-9c4f-1a10e61b025b" May 15 12:52:51.953298 systemd[1]: run-netns-cni\x2d045953e0\x2d2cea\x2d6cf7\x2d361c\x2d940a8d440cbf.mount: Deactivated successfully. 
May 15 12:52:52.412515 kubelet[2701]: I0515 12:52:52.412439 2701 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 12:52:52.413490 kubelet[2701]: E0515 12:52:52.413422 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:52:52.417862 systemd[1]: Created slice kubepods-besteffort-pod30c809cf_5d96_45fe_9af3_2a80162d2f28.slice - libcontainer container kubepods-besteffort-pod30c809cf_5d96_45fe_9af3_2a80162d2f28.slice. May 15 12:52:52.421602 containerd[1536]: time="2025-05-15T12:52:52.421541275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d7hrh,Uid:30c809cf-5d96-45fe-9af3-2a80162d2f28,Namespace:calico-system,Attempt:0,}" May 15 12:52:52.534837 kubelet[2701]: E0515 12:52:52.532348 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:52:52.866983 containerd[1536]: time="2025-05-15T12:52:52.866929432Z" level=error msg="Failed to destroy network for sandbox \"64e3251b6678a06f03e930466186a538892137df8c5f6eb6fe990cccb3f18ced\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:52:52.871113 systemd[1]: run-netns-cni\x2dd479691c\x2de164\x2d60ad\x2d47f2\x2db10a61b932d5.mount: Deactivated successfully. May 15 12:52:52.872916 containerd[1536]: time="2025-05-15T12:52:52.872859019Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d7hrh,Uid:30c809cf-5d96-45fe-9af3-2a80162d2f28,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"64e3251b6678a06f03e930466186a538892137df8c5f6eb6fe990cccb3f18ced\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:52:52.873308 kubelet[2701]: E0515 12:52:52.873270 2701 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64e3251b6678a06f03e930466186a538892137df8c5f6eb6fe990cccb3f18ced\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:52:52.873352 kubelet[2701]: E0515 12:52:52.873321 2701 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64e3251b6678a06f03e930466186a538892137df8c5f6eb6fe990cccb3f18ced\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d7hrh" May 15 12:52:52.873352 kubelet[2701]: E0515 12:52:52.873340 2701 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64e3251b6678a06f03e930466186a538892137df8c5f6eb6fe990cccb3f18ced\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/csi-node-driver-d7hrh" May 15 12:52:52.873394 kubelet[2701]: E0515 12:52:52.873375 2701 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-d7hrh_calico-system(30c809cf-5d96-45fe-9af3-2a80162d2f28)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-d7hrh_calico-system(30c809cf-5d96-45fe-9af3-2a80162d2f28)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"64e3251b6678a06f03e930466186a538892137df8c5f6eb6fe990cccb3f18ced\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-d7hrh" podUID="30c809cf-5d96-45fe-9af3-2a80162d2f28" May 15 12:52:57.967291 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3992633498.mount: Deactivated successfully. May 15 12:52:57.968057 containerd[1536]: time="2025-05-15T12:52:57.967654240Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.29.3\": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount3992633498: write /var/lib/containerd/tmpmounts/containerd-mount3992633498/usr/lib/calico/bpf/from_nat_info.o: no space left on device" May 15 12:52:57.968057 containerd[1536]: time="2025-05-15T12:52:57.967728110Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" May 15 12:52:57.969280 kubelet[2701]: E0515 12:52:57.968720 2701 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.29.3\": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount3992633498: write /var/lib/containerd/tmpmounts/containerd-mount3992633498/usr/lib/calico/bpf/from_nat_info.o: no space left on device" image="ghcr.io/flatcar/calico/node:v3.29.3" May 15 12:52:57.969607 kubelet[2701]: E0515 12:52:57.969246 2701 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.29.3\": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount3992633498: write /var/lib/containerd/tmpmounts/containerd-mount3992633498/usr/lib/calico/bpf/from_nat_info.o: no space left on device" image="ghcr.io/flatcar/calico/node:v3.29.3" May 15 12:52:57.970151 kubelet[2701]: E0515 12:52:57.969949 2701 kuberuntime_manager.go:1272] "Unhandled Error" err="container 
&Container{Name:calico-node,Image:ghcr.io/flatcar/calico/node:v3.29.3,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:WAIT_FOR_DATASTORE,Value:true,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:k8s,operator,bgp,ValueFrom:nil,},EnvVar{Name:CALICO_DISABLE_FILE_LOGGING,Value:false,ValueFrom:nil,},EnvVar{Name:FELIX_DEFAULTENDPOINTTOHOSTACTION,Value:ACCEPT,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHENABLED,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHPORT,Value:9099,ValueFrom:nil,},EnvVar{Name:NODENAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:FELIX_TYPHAK8SNAMESPACE,Value:calico-system,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAK8SSERVICENAME,Value:calico-typha,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACAFILE,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACERTFILE,Value:/node-certs/tls.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAKEYFILE,Value:/node-certs/tls.key,ValueFrom:nil,},EnvVar{Name:FIPS_MODE_ENABLED,Value:false,ValueFrom:nil,},EnvVar{Name:NO_DEFAULT_POOLS,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACN,Value:typha-server,ValueFrom:nil,},EnvVar{Name:CALICO_MANAGE_CNI,Value:true,ValueFrom:nil,},EnvVar{Name:CALICO_NETWORKING_BACKEND,Value:bird,ValueFrom:nil,},EnvVar{Name:IP,Value:autodetect,ValueFrom:nil,},EnvVar{Name:IP_AUTODETECTION_METHOD,Value:first-found,ValueFrom:nil,},EnvVar{Name:IP6,Value:none,ValueFrom:nil,},EnvVar{Name:FELIX_IPV6SUPPORT,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:policysync,ReadOnly:false,MountPath:/var/run/nodeagent,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:node-certs,ReadOnly:true,MountPath:/node-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-calico,ReadOnly:false,MountPath:/var/run/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib-calico,ReadOnly:false,MountPath:/var/lib/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-log-dir,ReadOnly:false,MountPath:/var/log/calico/cni,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-net-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-acces
s-9zm75,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/liveness,Port:{0 9099 },Host:localhost,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/calico-node -bird-ready -felix-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/bin/calico-node -shutdown],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-node-qxkgl_calico-system(f69f6551-7509-4bf1-a40a-1170e25b66f0): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.29.3\": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount3992633498: write /var/lib/containerd/tmpmounts/containerd-mount3992633498/usr/lib/calico/bpf/from_nat_info.o: no space left on device" logger="UnhandledError" May 15 12:52:57.972122 kubelet[2701]: E0515 12:52:57.971941 2701 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.29.3\\\": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount3992633498: write /var/lib/containerd/tmpmounts/containerd-mount3992633498/usr/lib/calico/bpf/from_nat_info.o: no space left on device\"" pod="calico-system/calico-node-qxkgl" podUID="f69f6551-7509-4bf1-a40a-1170e25b66f0" May 15 12:52:58.663409 kubelet[2701]: I0515 12:52:58.663368 2701 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 12:52:58.663409 kubelet[2701]: I0515 12:52:58.663414 2701 container_gc.go:88] "Attempting to delete unused containers" May 15 12:52:58.665885 kubelet[2701]: I0515 12:52:58.665867 2701 image_gc_manager.go:431] "Attempting to delete unused images" May 15 12:52:58.675001 kubelet[2701]: I0515 12:52:58.674984 2701 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 12:52:58.675068 kubelet[2701]: I0515 12:52:58.675045 2701 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" 
pods=["calico-apiserver/calico-apiserver-7959dc45d5-ldnxl","calico-apiserver/calico-apiserver-7959dc45d5-rx2qm","calico-system/calico-kube-controllers-6cff7bc5b6-sff94","kube-system/coredns-6f6b679f8f-hfrjd","kube-system/coredns-6f6b679f8f-svgwx","calico-system/calico-node-qxkgl","calico-system/csi-node-driver-d7hrh","tigera-operator/tigera-operator-6f6897fdc5-8rg8f","calico-system/calico-typha-59b79bbb46-9qqgw","kube-system/kube-controller-manager-172-232-9-197","kube-system/kube-proxy-rhz8r","kube-system/kube-apiserver-172-232-9-197","kube-system/kube-scheduler-172-232-9-197"] May 15 12:52:58.679413 kubelet[2701]: I0515 12:52:58.679383 2701 eviction_manager.go:616] "Eviction manager: pod is evicted successfully" pod="calico-apiserver/calico-apiserver-7959dc45d5-ldnxl" May 15 12:52:58.679413 kubelet[2701]: I0515 12:52:58.679398 2701 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["calico-apiserver/calico-apiserver-7959dc45d5-ldnxl"] May 15 12:52:58.698117 kubelet[2701]: I0515 12:52:58.698054 2701 kubelet.go:2306] "Pod admission denied" podUID="eb102aba-da9b-45fc-9513-c953d377992f" pod="calico-apiserver/calico-apiserver-7959dc45d5-5llpx" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:52:58.724763 kubelet[2701]: I0515 12:52:58.724653 2701 kubelet.go:2306] "Pod admission denied" podUID="6198e39a-1d9b-425c-af8e-a755b691b395" pod="calico-apiserver/calico-apiserver-7959dc45d5-l44dg" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:52:58.750975 kubelet[2701]: I0515 12:52:58.750934 2701 kubelet.go:2306] "Pod admission denied" podUID="76af114f-0505-40a7-92c2-eb9dee14a285" pod="calico-apiserver/calico-apiserver-7959dc45d5-lhm2j" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:52:58.775260 kubelet[2701]: I0515 12:52:58.775008 2701 kubelet.go:2306] "Pod admission denied" podUID="7f3e6ef7-f7f5-486f-b003-3ddedcddda86" pod="calico-apiserver/calico-apiserver-7959dc45d5-mk46x" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:52:58.794400 kubelet[2701]: I0515 12:52:58.794352 2701 kubelet.go:2306] "Pod admission denied" podUID="c3c204a5-8b6a-4f34-bda8-b1428629e2da" pod="calico-apiserver/calico-apiserver-7959dc45d5-chbsk" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:52:58.815237 kubelet[2701]: I0515 12:52:58.815175 2701 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzgqq\" (UniqueName: \"kubernetes.io/projected/91a29644-2dbd-46d3-a014-8444fea2be38-kube-api-access-nzgqq\") pod \"91a29644-2dbd-46d3-a014-8444fea2be38\" (UID: \"91a29644-2dbd-46d3-a014-8444fea2be38\") " May 15 12:52:58.816156 kubelet[2701]: I0515 12:52:58.815900 2701 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/91a29644-2dbd-46d3-a014-8444fea2be38-calico-apiserver-certs\") pod \"91a29644-2dbd-46d3-a014-8444fea2be38\" (UID: \"91a29644-2dbd-46d3-a014-8444fea2be38\") " May 15 12:52:58.820530 kubelet[2701]: I0515 12:52:58.820003 2701 kubelet.go:2306] "Pod admission denied" podUID="f5ea5a21-9484-47f0-ab2d-3738eb44edb9" pod="calico-apiserver/calico-apiserver-7959dc45d5-66z78" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 12:52:58.826519 kubelet[2701]: I0515 12:52:58.826476 2701 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91a29644-2dbd-46d3-a014-8444fea2be38-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "91a29644-2dbd-46d3-a014-8444fea2be38" (UID: "91a29644-2dbd-46d3-a014-8444fea2be38"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" May 15 12:52:58.826646 kubelet[2701]: I0515 12:52:58.826631 2701 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91a29644-2dbd-46d3-a014-8444fea2be38-kube-api-access-nzgqq" (OuterVolumeSpecName: "kube-api-access-nzgqq") pod "91a29644-2dbd-46d3-a014-8444fea2be38" (UID: "91a29644-2dbd-46d3-a014-8444fea2be38"). InnerVolumeSpecName "kube-api-access-nzgqq". PluginName "kubernetes.io/projected", VolumeGidValue "" May 15 12:52:58.827028 systemd[1]: var-lib-kubelet-pods-91a29644\x2d2dbd\x2d46d3\x2da014\x2d8444fea2be38-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnzgqq.mount: Deactivated successfully. May 15 12:52:58.832250 systemd[1]: var-lib-kubelet-pods-91a29644\x2d2dbd\x2d46d3\x2da014\x2d8444fea2be38-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. May 15 12:52:58.855894 kubelet[2701]: I0515 12:52:58.855821 2701 kubelet.go:2306] "Pod admission denied" podUID="d533f487-3d5d-4065-9405-d020a392f654" pod="calico-apiserver/calico-apiserver-7959dc45d5-mcv98" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:52:58.890623 kubelet[2701]: I0515 12:52:58.890565 2701 kubelet.go:2306] "Pod admission denied" podUID="59d675c2-f97b-4cdf-a0c5-7ae3d5ff1cc6" pod="calico-apiserver/calico-apiserver-7959dc45d5-zjx44" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:52:58.911903 kubelet[2701]: I0515 12:52:58.911863 2701 kubelet.go:2306] "Pod admission denied" podUID="93a32ef0-5759-4b22-aaa4-8f394334cd13" pod="calico-apiserver/calico-apiserver-7959dc45d5-rg4vl" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:52:58.916431 kubelet[2701]: I0515 12:52:58.916264 2701 reconciler_common.go:288] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/91a29644-2dbd-46d3-a014-8444fea2be38-calico-apiserver-certs\") on node \"172-232-9-197\" DevicePath \"\"" May 15 12:52:58.916431 kubelet[2701]: I0515 12:52:58.916288 2701 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-nzgqq\" (UniqueName: \"kubernetes.io/projected/91a29644-2dbd-46d3-a014-8444fea2be38-kube-api-access-nzgqq\") on node \"172-232-9-197\" DevicePath \"\"" May 15 12:52:59.049326 kubelet[2701]: I0515 12:52:59.049266 2701 kubelet.go:2306] "Pod admission denied" podUID="80f15c25-fb4b-41aa-9153-10c9c7af1f5c" pod="calico-apiserver/calico-apiserver-7959dc45d5-frfxs" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:52:59.302343 kubelet[2701]: I0515 12:52:59.302002 2701 kubelet.go:2306] "Pod admission denied" podUID="83f0822d-6d69-42f6-ab03-bbf8f18fd1f3" pod="calico-apiserver/calico-apiserver-7959dc45d5-v6jl9" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:52:59.553794 systemd[1]: Removed slice kubepods-besteffort-pod91a29644_2dbd_46d3_a014_8444fea2be38.slice - libcontainer container kubepods-besteffort-pod91a29644_2dbd_46d3_a014_8444fea2be38.slice. 
May 15 12:52:59.680509 kubelet[2701]: I0515 12:52:59.680376 2701 eviction_manager.go:447] "Eviction manager: pods successfully cleaned up" pods=["calico-apiserver/calico-apiserver-7959dc45d5-ldnxl"] May 15 12:53:02.408656 containerd[1536]: time="2025-05-15T12:53:02.408353383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7959dc45d5-rx2qm,Uid:970a8e31-5dc6-480a-b8f6-97944b700ab1,Namespace:calico-apiserver,Attempt:0,}" May 15 12:53:02.483415 containerd[1536]: time="2025-05-15T12:53:02.483373097Z" level=error msg="Failed to destroy network for sandbox \"49297cd3a40c63911d907bd48cdeff6251d5d7f94d7bca5253d2c2c1787f76c2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:02.486326 systemd[1]: run-netns-cni\x2d4f990e19\x2d2011\x2d73c5\x2d10ee\x2d80e845000b7d.mount: Deactivated successfully. May 15 12:53:02.486711 containerd[1536]: time="2025-05-15T12:53:02.486664860Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7959dc45d5-rx2qm,Uid:970a8e31-5dc6-480a-b8f6-97944b700ab1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"49297cd3a40c63911d907bd48cdeff6251d5d7f94d7bca5253d2c2c1787f76c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:02.487013 kubelet[2701]: E0515 12:53:02.486970 2701 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49297cd3a40c63911d907bd48cdeff6251d5d7f94d7bca5253d2c2c1787f76c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:02.487316 kubelet[2701]: E0515 12:53:02.487047 2701 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49297cd3a40c63911d907bd48cdeff6251d5d7f94d7bca5253d2c2c1787f76c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7959dc45d5-rx2qm" May 15 12:53:02.487316 kubelet[2701]: E0515 12:53:02.487072 2701 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49297cd3a40c63911d907bd48cdeff6251d5d7f94d7bca5253d2c2c1787f76c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7959dc45d5-rx2qm" May 15 12:53:02.487631 kubelet[2701]: E0515 12:53:02.487593 2701 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7959dc45d5-rx2qm_calico-apiserver(970a8e31-5dc6-480a-b8f6-97944b700ab1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7959dc45d5-rx2qm_calico-apiserver(970a8e31-5dc6-480a-b8f6-97944b700ab1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"49297cd3a40c63911d907bd48cdeff6251d5d7f94d7bca5253d2c2c1787f76c2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7959dc45d5-rx2qm" podUID="970a8e31-5dc6-480a-b8f6-97944b700ab1" May 15 12:53:04.409023 kubelet[2701]: E0515 12:53:04.407769 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:53:04.409482 containerd[1536]: time="2025-05-15T12:53:04.408922309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hfrjd,Uid:be5ba310-2fcd-4763-b9cf-8dba85ce0f76,Namespace:kube-system,Attempt:0,}" May 15 12:53:04.410350 kubelet[2701]: E0515 12:53:04.408450 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:53:04.410908 containerd[1536]: time="2025-05-15T12:53:04.410687291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-svgwx,Uid:a44ca807-6ed0-447f-9c4f-1a10e61b025b,Namespace:kube-system,Attempt:0,}" May 15 12:53:04.411389 containerd[1536]: time="2025-05-15T12:53:04.411060541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d7hrh,Uid:30c809cf-5d96-45fe-9af3-2a80162d2f28,Namespace:calico-system,Attempt:0,}" May 15 12:53:04.518166 containerd[1536]: time="2025-05-15T12:53:04.518075488Z" level=error msg="Failed to destroy network for sandbox \"231fd9eaf80eac06938296d87953638252475b5d8ad4b46c891bb5c746f5fdbb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:04.519002 containerd[1536]: time="2025-05-15T12:53:04.518090608Z" level=error msg="Failed to destroy network for sandbox \"544d697930d3694607273c042a0921f908696eacb21c2b7f3323eb2b716105b9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:04.520701 systemd[1]: run-netns-cni\x2d8e0fdcaa\x2d02c0\x2ddb5c\x2d2732\x2da602fdaa926a.mount: Deactivated successfully. 
May 15 12:53:04.521355 containerd[1536]: time="2025-05-15T12:53:04.520908470Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d7hrh,Uid:30c809cf-5d96-45fe-9af3-2a80162d2f28,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"231fd9eaf80eac06938296d87953638252475b5d8ad4b46c891bb5c746f5fdbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:04.522653 kubelet[2701]: E0515 12:53:04.521730 2701 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"231fd9eaf80eac06938296d87953638252475b5d8ad4b46c891bb5c746f5fdbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:04.522653 kubelet[2701]: E0515 12:53:04.521799 2701 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"231fd9eaf80eac06938296d87953638252475b5d8ad4b46c891bb5c746f5fdbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d7hrh" May 15 12:53:04.522653 kubelet[2701]: E0515 12:53:04.521819 2701 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"231fd9eaf80eac06938296d87953638252475b5d8ad4b46c891bb5c746f5fdbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d7hrh" May 15 12:53:04.522653 kubelet[2701]: E0515 12:53:04.521864 2701 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-d7hrh_calico-system(30c809cf-5d96-45fe-9af3-2a80162d2f28)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-d7hrh_calico-system(30c809cf-5d96-45fe-9af3-2a80162d2f28)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"231fd9eaf80eac06938296d87953638252475b5d8ad4b46c891bb5c746f5fdbb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-d7hrh" podUID="30c809cf-5d96-45fe-9af3-2a80162d2f28" May 15 12:53:04.525473 systemd[1]: run-netns-cni\x2d2882c146\x2d52ce\x2d882d\x2d69c2\x2d1280488e117b.mount: Deactivated successfully. 
May 15 12:53:04.526610 containerd[1536]: time="2025-05-15T12:53:04.525709235Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-svgwx,Uid:a44ca807-6ed0-447f-9c4f-1a10e61b025b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"544d697930d3694607273c042a0921f908696eacb21c2b7f3323eb2b716105b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:04.526943 kubelet[2701]: E0515 12:53:04.526822 2701 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"544d697930d3694607273c042a0921f908696eacb21c2b7f3323eb2b716105b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:04.526943 kubelet[2701]: E0515 12:53:04.526848 2701 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"544d697930d3694607273c042a0921f908696eacb21c2b7f3323eb2b716105b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-svgwx" May 15 12:53:04.526943 kubelet[2701]: E0515 12:53:04.526863 2701 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"544d697930d3694607273c042a0921f908696eacb21c2b7f3323eb2b716105b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-svgwx" May 15 12:53:04.527503 kubelet[2701]: E0515 12:53:04.527007 2701 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-svgwx_kube-system(a44ca807-6ed0-447f-9c4f-1a10e61b025b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-svgwx_kube-system(a44ca807-6ed0-447f-9c4f-1a10e61b025b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"544d697930d3694607273c042a0921f908696eacb21c2b7f3323eb2b716105b9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-svgwx" podUID="a44ca807-6ed0-447f-9c4f-1a10e61b025b" May 15 12:53:04.527584 containerd[1536]: time="2025-05-15T12:53:04.527329976Z" level=error msg="Failed to destroy network for sandbox \"37f6cdf184aadcd89eaf9e7150bb86fa36ad15e121d53c0f49c97b0b3caa553d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:04.528205 containerd[1536]: time="2025-05-15T12:53:04.528164466Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hfrjd,Uid:be5ba310-2fcd-4763-b9cf-8dba85ce0f76,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"37f6cdf184aadcd89eaf9e7150bb86fa36ad15e121d53c0f49c97b0b3caa553d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:04.528561 kubelet[2701]: E0515 12:53:04.528535 2701 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37f6cdf184aadcd89eaf9e7150bb86fa36ad15e121d53c0f49c97b0b3caa553d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:04.528623 kubelet[2701]: E0515 12:53:04.528560 2701 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37f6cdf184aadcd89eaf9e7150bb86fa36ad15e121d53c0f49c97b0b3caa553d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-hfrjd" May 15 12:53:04.528623 kubelet[2701]: E0515 12:53:04.528575 2701 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37f6cdf184aadcd89eaf9e7150bb86fa36ad15e121d53c0f49c97b0b3caa553d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-hfrjd" May 15 12:53:04.528623 kubelet[2701]: E0515 12:53:04.528598 2701 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-hfrjd_kube-system(be5ba310-2fcd-4763-b9cf-8dba85ce0f76)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-hfrjd_kube-system(be5ba310-2fcd-4763-b9cf-8dba85ce0f76)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"37f6cdf184aadcd89eaf9e7150bb86fa36ad15e121d53c0f49c97b0b3caa553d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-hfrjd" podUID="be5ba310-2fcd-4763-b9cf-8dba85ce0f76" May 15 12:53:05.407787 containerd[1536]: time="2025-05-15T12:53:05.407699085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cff7bc5b6-sff94,Uid:9345d3c6-0d7e-4691-8b9a-ecd0176d0441,Namespace:calico-system,Attempt:0,}" May 15 12:53:05.416908 systemd[1]: run-netns-cni\x2db8aa63b2\x2df3e8\x2dccd0\x2d8eaa\x2ddbe26f16b5be.mount: Deactivated successfully. May 15 12:53:05.464404 containerd[1536]: time="2025-05-15T12:53:05.464347541Z" level=error msg="Failed to destroy network for sandbox \"c5034c14543f2e0a163232fa1607761252559423e8c051c3a4fe17a32edfa937\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:05.466337 systemd[1]: run-netns-cni\x2d62169da2\x2da64d\x2dc4a5\x2de569\x2d87f9abfc0d03.mount: Deactivated successfully. 
May 15 12:53:05.468191 containerd[1536]: time="2025-05-15T12:53:05.467632823Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cff7bc5b6-sff94,Uid:9345d3c6-0d7e-4691-8b9a-ecd0176d0441,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5034c14543f2e0a163232fa1607761252559423e8c051c3a4fe17a32edfa937\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:05.468267 kubelet[2701]: E0515 12:53:05.467873 2701 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5034c14543f2e0a163232fa1607761252559423e8c051c3a4fe17a32edfa937\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:05.468267 kubelet[2701]: E0515 12:53:05.467939 2701 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5034c14543f2e0a163232fa1607761252559423e8c051c3a4fe17a32edfa937\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6cff7bc5b6-sff94" May 15 12:53:05.468267 kubelet[2701]: E0515 12:53:05.467957 2701 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5034c14543f2e0a163232fa1607761252559423e8c051c3a4fe17a32edfa937\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6cff7bc5b6-sff94" May 15 12:53:05.468267 kubelet[2701]: E0515 12:53:05.467995 2701 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6cff7bc5b6-sff94_calico-system(9345d3c6-0d7e-4691-8b9a-ecd0176d0441)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6cff7bc5b6-sff94_calico-system(9345d3c6-0d7e-4691-8b9a-ecd0176d0441)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c5034c14543f2e0a163232fa1607761252559423e8c051c3a4fe17a32edfa937\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6cff7bc5b6-sff94" podUID="9345d3c6-0d7e-4691-8b9a-ecd0176d0441" May 15 12:53:08.409618 kubelet[2701]: E0515 12:53:08.408252 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:53:08.410222 containerd[1536]: time="2025-05-15T12:53:08.410097600Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 15 12:53:09.711337 kubelet[2701]: I0515 12:53:09.711306 2701 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 12:53:09.711337 kubelet[2701]: I0515 12:53:09.711342 2701 container_gc.go:88] 
"Attempting to delete unused containers" May 15 12:53:09.713804 kubelet[2701]: I0515 12:53:09.713791 2701 image_gc_manager.go:431] "Attempting to delete unused images" May 15 12:53:09.724878 kubelet[2701]: I0515 12:53:09.724845 2701 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 12:53:09.724968 kubelet[2701]: I0515 12:53:09.724937 2701 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-apiserver/calico-apiserver-7959dc45d5-rx2qm","calico-system/calico-kube-controllers-6cff7bc5b6-sff94","kube-system/coredns-6f6b679f8f-hfrjd","kube-system/coredns-6f6b679f8f-svgwx","calico-system/csi-node-driver-d7hrh","calico-system/calico-node-qxkgl","tigera-operator/tigera-operator-6f6897fdc5-8rg8f","calico-system/calico-typha-59b79bbb46-9qqgw","kube-system/kube-controller-manager-172-232-9-197","kube-system/kube-proxy-rhz8r","kube-system/kube-apiserver-172-232-9-197","kube-system/kube-scheduler-172-232-9-197"] May 15 12:53:09.731327 kubelet[2701]: I0515 12:53:09.731296 2701 eviction_manager.go:616] "Eviction manager: pod is evicted successfully" pod="calico-apiserver/calico-apiserver-7959dc45d5-rx2qm" May 15 12:53:09.731327 kubelet[2701]: I0515 12:53:09.731312 2701 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["calico-apiserver/calico-apiserver-7959dc45d5-rx2qm"] May 15 12:53:09.882109 kubelet[2701]: I0515 12:53:09.881618 2701 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8k9h\" (UniqueName: \"kubernetes.io/projected/970a8e31-5dc6-480a-b8f6-97944b700ab1-kube-api-access-s8k9h\") pod \"970a8e31-5dc6-480a-b8f6-97944b700ab1\" (UID: \"970a8e31-5dc6-480a-b8f6-97944b700ab1\") " May 15 12:53:09.882109 kubelet[2701]: I0515 12:53:09.881687 2701 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/970a8e31-5dc6-480a-b8f6-97944b700ab1-calico-apiserver-certs\") pod \"970a8e31-5dc6-480a-b8f6-97944b700ab1\" (UID: \"970a8e31-5dc6-480a-b8f6-97944b700ab1\") " May 15 12:53:09.887679 systemd[1]: var-lib-kubelet-pods-970a8e31\x2d5dc6\x2d480a\x2db8f6\x2d97944b700ab1-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. May 15 12:53:09.890390 kubelet[2701]: I0515 12:53:09.889593 2701 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/970a8e31-5dc6-480a-b8f6-97944b700ab1-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "970a8e31-5dc6-480a-b8f6-97944b700ab1" (UID: "970a8e31-5dc6-480a-b8f6-97944b700ab1"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" May 15 12:53:09.890390 kubelet[2701]: I0515 12:53:09.890341 2701 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/970a8e31-5dc6-480a-b8f6-97944b700ab1-kube-api-access-s8k9h" (OuterVolumeSpecName: "kube-api-access-s8k9h") pod "970a8e31-5dc6-480a-b8f6-97944b700ab1" (UID: "970a8e31-5dc6-480a-b8f6-97944b700ab1"). InnerVolumeSpecName "kube-api-access-s8k9h". PluginName "kubernetes.io/projected", VolumeGidValue "" May 15 12:53:09.890623 systemd[1]: var-lib-kubelet-pods-970a8e31\x2d5dc6\x2d480a\x2db8f6\x2d97944b700ab1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ds8k9h.mount: Deactivated successfully. 
May 15 12:53:09.982089 kubelet[2701]: I0515 12:53:09.981951 2701 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-s8k9h\" (UniqueName: \"kubernetes.io/projected/970a8e31-5dc6-480a-b8f6-97944b700ab1-kube-api-access-s8k9h\") on node \"172-232-9-197\" DevicePath \"\"" May 15 12:53:09.982089 kubelet[2701]: I0515 12:53:09.981988 2701 reconciler_common.go:288] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/970a8e31-5dc6-480a-b8f6-97944b700ab1-calico-apiserver-certs\") on node \"172-232-9-197\" DevicePath \"\"" May 15 12:53:10.415204 systemd[1]: Removed slice kubepods-besteffort-pod970a8e31_5dc6_480a_b8f6_97944b700ab1.slice - libcontainer container kubepods-besteffort-pod970a8e31_5dc6_480a_b8f6_97944b700ab1.slice. May 15 12:53:10.732021 kubelet[2701]: I0515 12:53:10.731873 2701 eviction_manager.go:447] "Eviction manager: pods successfully cleaned up" pods=["calico-apiserver/calico-apiserver-7959dc45d5-rx2qm"] May 15 12:53:15.406982 kubelet[2701]: E0515 12:53:15.406883 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:53:15.408839 containerd[1536]: time="2025-05-15T12:53:15.408795969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hfrjd,Uid:be5ba310-2fcd-4763-b9cf-8dba85ce0f76,Namespace:kube-system,Attempt:0,}" May 15 12:53:15.536509 containerd[1536]: time="2025-05-15T12:53:15.536436072Z" level=error msg="Failed to destroy network for sandbox \"9331903a838cf9ee68525e55fdd72ea0fa53ef730783b3c9d8eee121eaa679b8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:15.540017 systemd[1]: run-netns-cni\x2d002653ec\x2d78c5\x2d07aa\x2dafd5\x2d6e3751052434.mount: Deactivated successfully. 
May 15 12:53:15.541133 containerd[1536]: time="2025-05-15T12:53:15.541001175Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hfrjd,Uid:be5ba310-2fcd-4763-b9cf-8dba85ce0f76,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9331903a838cf9ee68525e55fdd72ea0fa53ef730783b3c9d8eee121eaa679b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:15.541601 kubelet[2701]: E0515 12:53:15.541565 2701 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9331903a838cf9ee68525e55fdd72ea0fa53ef730783b3c9d8eee121eaa679b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:15.542137 kubelet[2701]: E0515 12:53:15.541615 2701 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9331903a838cf9ee68525e55fdd72ea0fa53ef730783b3c9d8eee121eaa679b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-hfrjd" May 15 12:53:15.542137 kubelet[2701]: E0515 12:53:15.541757 2701 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9331903a838cf9ee68525e55fdd72ea0fa53ef730783b3c9d8eee121eaa679b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-hfrjd" May 15 12:53:15.543030 kubelet[2701]: E0515 12:53:15.542749 2701 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-hfrjd_kube-system(be5ba310-2fcd-4763-b9cf-8dba85ce0f76)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-hfrjd_kube-system(be5ba310-2fcd-4763-b9cf-8dba85ce0f76)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9331903a838cf9ee68525e55fdd72ea0fa53ef730783b3c9d8eee121eaa679b8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-hfrjd" podUID="be5ba310-2fcd-4763-b9cf-8dba85ce0f76" May 15 12:53:16.410998 containerd[1536]: time="2025-05-15T12:53:16.410951320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d7hrh,Uid:30c809cf-5d96-45fe-9af3-2a80162d2f28,Namespace:calico-system,Attempt:0,}" May 15 12:53:16.491971 containerd[1536]: time="2025-05-15T12:53:16.491923953Z" level=error msg="Failed to destroy network for sandbox \"b4c8ad544ade9b9763f64f260debc43f4ec74412eddf71d791c32abd00ed00e8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:16.494424 systemd[1]: run-netns-cni\x2dfaa37521\x2de814\x2d3149\x2d6560\x2d81ec25ac23fe.mount: Deactivated 
successfully. May 15 12:53:16.496796 kubelet[2701]: E0515 12:53:16.496238 2701 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4c8ad544ade9b9763f64f260debc43f4ec74412eddf71d791c32abd00ed00e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:16.496796 kubelet[2701]: E0515 12:53:16.496294 2701 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4c8ad544ade9b9763f64f260debc43f4ec74412eddf71d791c32abd00ed00e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d7hrh" May 15 12:53:16.496796 kubelet[2701]: E0515 12:53:16.496313 2701 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4c8ad544ade9b9763f64f260debc43f4ec74412eddf71d791c32abd00ed00e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d7hrh" May 15 12:53:16.496796 kubelet[2701]: E0515 12:53:16.496357 2701 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-d7hrh_calico-system(30c809cf-5d96-45fe-9af3-2a80162d2f28)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-d7hrh_calico-system(30c809cf-5d96-45fe-9af3-2a80162d2f28)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b4c8ad544ade9b9763f64f260debc43f4ec74412eddf71d791c32abd00ed00e8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-d7hrh" podUID="30c809cf-5d96-45fe-9af3-2a80162d2f28" May 15 12:53:16.497185 containerd[1536]: time="2025-05-15T12:53:16.494535834Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d7hrh,Uid:30c809cf-5d96-45fe-9af3-2a80162d2f28,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4c8ad544ade9b9763f64f260debc43f4ec74412eddf71d791c32abd00ed00e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:16.858373 containerd[1536]: time="2025-05-15T12:53:16.858275329Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.29.3\": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount4278684991: write /var/lib/containerd/tmpmounts/containerd-mount4278684991/usr/lib/calico/bpf/from_nat_info.o: no space left on device" May 15 12:53:16.858615 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4278684991.mount: Deactivated successfully. 
May 15 12:53:16.859021 containerd[1536]: time="2025-05-15T12:53:16.858621299Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" May 15 12:53:16.859356 kubelet[2701]: E0515 12:53:16.859207 2701 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.29.3\": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount4278684991: write /var/lib/containerd/tmpmounts/containerd-mount4278684991/usr/lib/calico/bpf/from_nat_info.o: no space left on device" image="ghcr.io/flatcar/calico/node:v3.29.3" May 15 12:53:16.859686 kubelet[2701]: E0515 12:53:16.859531 2701 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.29.3\": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount4278684991: write /var/lib/containerd/tmpmounts/containerd-mount4278684991/usr/lib/calico/bpf/from_nat_info.o: no space left on device" image="ghcr.io/flatcar/calico/node:v3.29.3" May 15 12:53:16.860052 kubelet[2701]: E0515 12:53:16.859922 2701 kuberuntime_manager.go:1272] "Unhandled Error" err="container &Container{Name:calico-node,Image:ghcr.io/flatcar/calico/node:v3.29.3,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:WAIT_FOR_DATASTORE,Value:true,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:k8s,operator,bgp,ValueFrom:nil,},EnvVar{Name:CALICO_DISABLE_FILE_LOGGING,Value:false,ValueFrom:nil,},EnvVar{Name:FELIX_DEFAULTENDPOINTTOHOSTACTION,Value:ACCEPT,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHENABLED,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHPORT,Value:9099,ValueFrom:nil,},EnvVar{Name:NODENAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:FELIX_TYPHAK8SNAMESPACE,Value:calico-system,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAK8SSERVICENAME,Value:calico-typha,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACAFILE,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACERTFILE,Value:/node-certs/tls.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAKEYFILE,Value:/node-certs/tls.key,ValueFrom:nil,},EnvVar{Name:FIPS_MODE_ENABLED,Value:false,ValueFrom:nil,},EnvVar{Name:NO_DEFAULT_POOLS,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACN,Value:typha-server,ValueFrom:nil,},EnvVar{Name:CALICO_MANAGE_CNI,Value:true,ValueFrom:nil,},EnvVar{Name:CALICO_NETWORKING_BACKEND,Value:bird,ValueFrom:nil,},EnvVar{Name:IP,Value:autodetect,ValueFrom:nil,},EnvVar{Name:IP_AUTODETECTION_METHOD,Value:first-found,ValueFrom:nil,},EnvVar{Name:IP6,Value:none,ValueFrom:nil,},EnvVar{Name:FELIX_IPV6SUPPORT,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMou
nt{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:policysync,ReadOnly:false,MountPath:/var/run/nodeagent,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:node-certs,ReadOnly:true,MountPath:/node-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-calico,ReadOnly:false,MountPath:/var/run/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib-calico,ReadOnly:false,MountPath:/var/lib/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-log-dir,ReadOnly:false,MountPath:/var/log/calico/cni,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-net-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9zm75,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/liveness,Port:{0 9099 },Host:localhost,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/calico-node -bird-ready -felix-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/bin/calico-node -shutdown],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-node-qxkgl_calico-system(f69f6551-7509-4bf1-a40a-1170e25b66f0): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.29.3\": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount4278684991: write /var/lib/containerd/tmpmounts/containerd-mount4278684991/usr/lib/calico/bpf/from_nat_info.o: no space left on device" logger="UnhandledError" May 15 12:53:16.862155 kubelet[2701]: E0515 12:53:16.862109 2701 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.29.3\\\": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount4278684991: write /var/lib/containerd/tmpmounts/containerd-mount4278684991/usr/lib/calico/bpf/from_nat_info.o: no space left on device\"" pod="calico-system/calico-node-qxkgl" podUID="f69f6551-7509-4bf1-a40a-1170e25b66f0" May 15 12:53:20.407640 kubelet[2701]: E0515 12:53:20.407403 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:53:20.408918 containerd[1536]: time="2025-05-15T12:53:20.407613323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cff7bc5b6-sff94,Uid:9345d3c6-0d7e-4691-8b9a-ecd0176d0441,Namespace:calico-system,Attempt:0,}" May 15 12:53:20.408918 containerd[1536]: time="2025-05-15T12:53:20.408715964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-svgwx,Uid:a44ca807-6ed0-447f-9c4f-1a10e61b025b,Namespace:kube-system,Attempt:0,}" May 15 12:53:20.473685 containerd[1536]: time="2025-05-15T12:53:20.473640724Z" level=error msg="Failed to destroy network for sandbox \"8d485b63abf60fde231d2ca56ad8bc11ed9e348c3aeb62855af3233ca4596e7a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:20.476188 systemd[1]: run-netns-cni\x2d954f9196\x2d7c18\x2d4cf2\x2d59df\x2daa360e886992.mount: Deactivated successfully. 
May 15 12:53:20.477111 containerd[1536]: time="2025-05-15T12:53:20.476243126Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cff7bc5b6-sff94,Uid:9345d3c6-0d7e-4691-8b9a-ecd0176d0441,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d485b63abf60fde231d2ca56ad8bc11ed9e348c3aeb62855af3233ca4596e7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:20.477177 kubelet[2701]: E0515 12:53:20.476540 2701 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d485b63abf60fde231d2ca56ad8bc11ed9e348c3aeb62855af3233ca4596e7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:20.477177 kubelet[2701]: E0515 12:53:20.476597 2701 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d485b63abf60fde231d2ca56ad8bc11ed9e348c3aeb62855af3233ca4596e7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6cff7bc5b6-sff94" May 15 12:53:20.477177 kubelet[2701]: E0515 12:53:20.476615 2701 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d485b63abf60fde231d2ca56ad8bc11ed9e348c3aeb62855af3233ca4596e7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6cff7bc5b6-sff94" May 15 12:53:20.477177 kubelet[2701]: E0515 12:53:20.476658 2701 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6cff7bc5b6-sff94_calico-system(9345d3c6-0d7e-4691-8b9a-ecd0176d0441)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6cff7bc5b6-sff94_calico-system(9345d3c6-0d7e-4691-8b9a-ecd0176d0441)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8d485b63abf60fde231d2ca56ad8bc11ed9e348c3aeb62855af3233ca4596e7a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6cff7bc5b6-sff94" podUID="9345d3c6-0d7e-4691-8b9a-ecd0176d0441" May 15 12:53:20.479386 containerd[1536]: time="2025-05-15T12:53:20.478509337Z" level=error msg="Failed to destroy network for sandbox \"02504454dedf5814e419b840fa1e508ede8dd2235c9ca3468b2b03e480903324\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:20.481509 containerd[1536]: time="2025-05-15T12:53:20.481367018Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-svgwx,Uid:a44ca807-6ed0-447f-9c4f-1a10e61b025b,Namespace:kube-system,Attempt:0,} failed, error" 
error="rpc error: code = Unknown desc = failed to setup network for sandbox \"02504454dedf5814e419b840fa1e508ede8dd2235c9ca3468b2b03e480903324\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:20.482270 kubelet[2701]: E0515 12:53:20.482194 2701 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02504454dedf5814e419b840fa1e508ede8dd2235c9ca3468b2b03e480903324\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:20.482270 kubelet[2701]: E0515 12:53:20.482234 2701 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02504454dedf5814e419b840fa1e508ede8dd2235c9ca3468b2b03e480903324\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-svgwx" May 15 12:53:20.482950 kubelet[2701]: E0515 12:53:20.482485 2701 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02504454dedf5814e419b840fa1e508ede8dd2235c9ca3468b2b03e480903324\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-svgwx" May 15 12:53:20.482950 kubelet[2701]: E0515 12:53:20.482534 2701 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-svgwx_kube-system(a44ca807-6ed0-447f-9c4f-1a10e61b025b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-svgwx_kube-system(a44ca807-6ed0-447f-9c4f-1a10e61b025b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"02504454dedf5814e419b840fa1e508ede8dd2235c9ca3468b2b03e480903324\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-svgwx" podUID="a44ca807-6ed0-447f-9c4f-1a10e61b025b" May 15 12:53:20.482704 systemd[1]: run-netns-cni\x2d1d9d5808\x2dd1e1\x2ddd00\x2d563b\x2d23593b167a45.mount: Deactivated successfully. 
May 15 12:53:20.758919 kubelet[2701]: I0515 12:53:20.758822 2701 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 12:53:20.759621 kubelet[2701]: I0515 12:53:20.759590 2701 container_gc.go:88] "Attempting to delete unused containers" May 15 12:53:20.764214 kubelet[2701]: I0515 12:53:20.764134 2701 image_gc_manager.go:431] "Attempting to delete unused images" May 15 12:53:20.776322 kubelet[2701]: I0515 12:53:20.776095 2701 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 12:53:20.776387 kubelet[2701]: I0515 12:53:20.776360 2701 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-6cff7bc5b6-sff94","kube-system/coredns-6f6b679f8f-svgwx","kube-system/coredns-6f6b679f8f-hfrjd","calico-system/csi-node-driver-d7hrh","calico-system/calico-node-qxkgl","tigera-operator/tigera-operator-6f6897fdc5-8rg8f","calico-system/calico-typha-59b79bbb46-9qqgw","kube-system/kube-controller-manager-172-232-9-197","kube-system/kube-proxy-rhz8r","kube-system/kube-apiserver-172-232-9-197","kube-system/kube-scheduler-172-232-9-197"] May 15 12:53:20.776451 kubelet[2701]: E0515 12:53:20.776388 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-6cff7bc5b6-sff94" May 15 12:53:20.776451 kubelet[2701]: E0515 12:53:20.776397 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-svgwx" May 15 12:53:20.776451 kubelet[2701]: E0515 12:53:20.776404 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-hfrjd" May 15 12:53:20.776451 kubelet[2701]: E0515 12:53:20.776411 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-d7hrh" May 15 12:53:20.776451 kubelet[2701]: E0515 12:53:20.776417 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-qxkgl" May 15 12:53:20.777200 containerd[1536]: time="2025-05-15T12:53:20.777131560Z" level=info msg="StopContainer for \"313b955a5f158124ee7f5810a2c9359060133f898d0991db5963dadad919bd9b\" with timeout 2 (s)" May 15 12:53:20.778556 containerd[1536]: time="2025-05-15T12:53:20.778527871Z" level=info msg="Stop container \"313b955a5f158124ee7f5810a2c9359060133f898d0991db5963dadad919bd9b\" with signal terminated" May 15 12:53:20.800146 systemd[1]: cri-containerd-313b955a5f158124ee7f5810a2c9359060133f898d0991db5963dadad919bd9b.scope: Deactivated successfully. May 15 12:53:20.800934 systemd[1]: cri-containerd-313b955a5f158124ee7f5810a2c9359060133f898d0991db5963dadad919bd9b.scope: Consumed 1.943s CPU time, 31.2M memory peak. 
May 15 12:53:20.806119 containerd[1536]: time="2025-05-15T12:53:20.806073577Z" level=info msg="received exit event container_id:\"313b955a5f158124ee7f5810a2c9359060133f898d0991db5963dadad919bd9b\" id:\"313b955a5f158124ee7f5810a2c9359060133f898d0991db5963dadad919bd9b\" pid:3056 exited_at:{seconds:1747313600 nanos:805715117}" May 15 12:53:20.806546 containerd[1536]: time="2025-05-15T12:53:20.806521778Z" level=info msg="TaskExit event in podsandbox handler container_id:\"313b955a5f158124ee7f5810a2c9359060133f898d0991db5963dadad919bd9b\" id:\"313b955a5f158124ee7f5810a2c9359060133f898d0991db5963dadad919bd9b\" pid:3056 exited_at:{seconds:1747313600 nanos:805715117}" May 15 12:53:20.826089 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-313b955a5f158124ee7f5810a2c9359060133f898d0991db5963dadad919bd9b-rootfs.mount: Deactivated successfully. May 15 12:53:20.834880 containerd[1536]: time="2025-05-15T12:53:20.834855595Z" level=info msg="StopContainer for \"313b955a5f158124ee7f5810a2c9359060133f898d0991db5963dadad919bd9b\" returns successfully" May 15 12:53:20.835409 containerd[1536]: time="2025-05-15T12:53:20.835386505Z" level=info msg="StopPodSandbox for \"7f168a58389272332e2519584a46f384575c19cf860482036ce9b51a0fc3a0e1\"" May 15 12:53:20.835478 containerd[1536]: time="2025-05-15T12:53:20.835436085Z" level=info msg="Container to stop \"313b955a5f158124ee7f5810a2c9359060133f898d0991db5963dadad919bd9b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 12:53:20.842089 systemd[1]: cri-containerd-7f168a58389272332e2519584a46f384575c19cf860482036ce9b51a0fc3a0e1.scope: Deactivated successfully. May 15 12:53:20.845849 containerd[1536]: time="2025-05-15T12:53:20.845824052Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7f168a58389272332e2519584a46f384575c19cf860482036ce9b51a0fc3a0e1\" id:\"7f168a58389272332e2519584a46f384575c19cf860482036ce9b51a0fc3a0e1\" pid:2933 exit_status:137 exited_at:{seconds:1747313600 nanos:845448112}" May 15 12:53:20.872663 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7f168a58389272332e2519584a46f384575c19cf860482036ce9b51a0fc3a0e1-rootfs.mount: Deactivated successfully. 
May 15 12:53:20.873959 containerd[1536]: time="2025-05-15T12:53:20.873936549Z" level=info msg="shim disconnected" id=7f168a58389272332e2519584a46f384575c19cf860482036ce9b51a0fc3a0e1 namespace=k8s.io May 15 12:53:20.874208 containerd[1536]: time="2025-05-15T12:53:20.874190409Z" level=warning msg="cleaning up after shim disconnected" id=7f168a58389272332e2519584a46f384575c19cf860482036ce9b51a0fc3a0e1 namespace=k8s.io May 15 12:53:20.874287 containerd[1536]: time="2025-05-15T12:53:20.874251849Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 12:53:20.875256 containerd[1536]: time="2025-05-15T12:53:20.875196170Z" level=info msg="received exit event sandbox_id:\"7f168a58389272332e2519584a46f384575c19cf860482036ce9b51a0fc3a0e1\" exit_status:137 exited_at:{seconds:1747313600 nanos:845448112}" May 15 12:53:20.875692 containerd[1536]: time="2025-05-15T12:53:20.875635040Z" level=info msg="TearDown network for sandbox \"7f168a58389272332e2519584a46f384575c19cf860482036ce9b51a0fc3a0e1\" successfully" May 15 12:53:20.875692 containerd[1536]: time="2025-05-15T12:53:20.875672500Z" level=info msg="StopPodSandbox for \"7f168a58389272332e2519584a46f384575c19cf860482036ce9b51a0fc3a0e1\" returns successfully" May 15 12:53:20.893399 kubelet[2701]: I0515 12:53:20.893148 2701 eviction_manager.go:616] "Eviction manager: pod is evicted successfully" pod="tigera-operator/tigera-operator-6f6897fdc5-8rg8f" May 15 12:53:20.893399 kubelet[2701]: I0515 12:53:20.893172 2701 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["tigera-operator/tigera-operator-6f6897fdc5-8rg8f"] May 15 12:53:20.920311 kubelet[2701]: I0515 12:53:20.920245 2701 kubelet.go:2306] "Pod admission denied" podUID="502e149d-4f5e-4411-b18f-a6738c679dc7" pod="tigera-operator/tigera-operator-6f6897fdc5-89cm7" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:20.946274 kubelet[2701]: I0515 12:53:20.946100 2701 kubelet.go:2306] "Pod admission denied" podUID="1aaf0ea7-0dd2-42de-8e0a-38624e369915" pod="tigera-operator/tigera-operator-6f6897fdc5-j6vw8" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:20.969317 kubelet[2701]: I0515 12:53:20.969077 2701 kubelet.go:2306] "Pod admission denied" podUID="4ee04124-998a-4d57-98e8-f27afd465240" pod="tigera-operator/tigera-operator-6f6897fdc5-pqcbl" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:20.991292 kubelet[2701]: I0515 12:53:20.991117 2701 kubelet.go:2306] "Pod admission denied" podUID="9abc13ee-bad4-49ca-bd34-62ef77c2b5ac" pod="tigera-operator/tigera-operator-6f6897fdc5-24r8h" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:21.013621 kubelet[2701]: I0515 12:53:21.013518 2701 kubelet.go:2306] "Pod admission denied" podUID="e211a4cb-6e5c-445f-bb90-4c87e1bbe128" pod="tigera-operator/tigera-operator-6f6897fdc5-r9vrz" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:21.038266 kubelet[2701]: I0515 12:53:21.038222 2701 kubelet.go:2306] "Pod admission denied" podUID="52a3704d-46cf-4236-83fc-ede0bc796ab9" pod="tigera-operator/tigera-operator-6f6897fdc5-kc5pp" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:21.056106 kubelet[2701]: I0515 12:53:21.056077 2701 kubelet.go:2306] "Pod admission denied" podUID="36a6724c-b20d-4304-86cc-932aa50f30df" pod="tigera-operator/tigera-operator-6f6897fdc5-4gsl4" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 12:53:21.057940 kubelet[2701]: I0515 12:53:21.057769 2701 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ffzt\" (UniqueName: \"kubernetes.io/projected/ad96d495-0fda-41e6-92fb-0740b7461b77-kube-api-access-2ffzt\") pod \"ad96d495-0fda-41e6-92fb-0740b7461b77\" (UID: \"ad96d495-0fda-41e6-92fb-0740b7461b77\") " May 15 12:53:21.059512 kubelet[2701]: I0515 12:53:21.059106 2701 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ad96d495-0fda-41e6-92fb-0740b7461b77-var-lib-calico\") pod \"ad96d495-0fda-41e6-92fb-0740b7461b77\" (UID: \"ad96d495-0fda-41e6-92fb-0740b7461b77\") " May 15 12:53:21.059512 kubelet[2701]: I0515 12:53:21.059189 2701 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad96d495-0fda-41e6-92fb-0740b7461b77-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "ad96d495-0fda-41e6-92fb-0740b7461b77" (UID: "ad96d495-0fda-41e6-92fb-0740b7461b77"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 12:53:21.066147 kubelet[2701]: I0515 12:53:21.065830 2701 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad96d495-0fda-41e6-92fb-0740b7461b77-kube-api-access-2ffzt" (OuterVolumeSpecName: "kube-api-access-2ffzt") pod "ad96d495-0fda-41e6-92fb-0740b7461b77" (UID: "ad96d495-0fda-41e6-92fb-0740b7461b77"). InnerVolumeSpecName "kube-api-access-2ffzt". PluginName "kubernetes.io/projected", VolumeGidValue "" May 15 12:53:21.079520 kubelet[2701]: I0515 12:53:21.079487 2701 kubelet.go:2306] "Pod admission denied" podUID="bdf30cab-9d69-486b-9f9c-f494b28349d3" pod="tigera-operator/tigera-operator-6f6897fdc5-tfb8r" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:21.098237 kubelet[2701]: I0515 12:53:21.098175 2701 kubelet.go:2306] "Pod admission denied" podUID="91473189-368b-49a7-b18c-2756449ce45b" pod="tigera-operator/tigera-operator-6f6897fdc5-hfgcl" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:21.160651 kubelet[2701]: I0515 12:53:21.160609 2701 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-2ffzt\" (UniqueName: \"kubernetes.io/projected/ad96d495-0fda-41e6-92fb-0740b7461b77-kube-api-access-2ffzt\") on node \"172-232-9-197\" DevicePath \"\"" May 15 12:53:21.160651 kubelet[2701]: I0515 12:53:21.160636 2701 reconciler_common.go:288] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ad96d495-0fda-41e6-92fb-0740b7461b77-var-lib-calico\") on node \"172-232-9-197\" DevicePath \"\"" May 15 12:53:21.220497 kubelet[2701]: I0515 12:53:21.220431 2701 kubelet.go:2306] "Pod admission denied" podUID="15768cd9-93e8-4abd-82c7-ecdd5b955a46" pod="tigera-operator/tigera-operator-6f6897fdc5-ldld8" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:21.368910 kubelet[2701]: I0515 12:53:21.368850 2701 kubelet.go:2306] "Pod admission denied" podUID="50252f1c-23c3-4c1e-920c-b21fab976e4c" pod="tigera-operator/tigera-operator-6f6897fdc5-4mw8p" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:21.413748 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7f168a58389272332e2519584a46f384575c19cf860482036ce9b51a0fc3a0e1-shm.mount: Deactivated successfully. 
May 15 12:53:21.413869 systemd[1]: var-lib-kubelet-pods-ad96d495\x2d0fda\x2d41e6\x2d92fb\x2d0740b7461b77-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2ffzt.mount: Deactivated successfully. May 15 12:53:21.468437 kubelet[2701]: I0515 12:53:21.468339 2701 kubelet.go:2306] "Pod admission denied" podUID="072563c9-c27c-4110-9c2c-d3d7b2b6026c" pod="tigera-operator/tigera-operator-6f6897fdc5-58tzq" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:21.590665 kubelet[2701]: I0515 12:53:21.590629 2701 scope.go:117] "RemoveContainer" containerID="313b955a5f158124ee7f5810a2c9359060133f898d0991db5963dadad919bd9b" May 15 12:53:21.593500 containerd[1536]: time="2025-05-15T12:53:21.592845304Z" level=info msg="RemoveContainer for \"313b955a5f158124ee7f5810a2c9359060133f898d0991db5963dadad919bd9b\"" May 15 12:53:21.596762 containerd[1536]: time="2025-05-15T12:53:21.596739607Z" level=info msg="RemoveContainer for \"313b955a5f158124ee7f5810a2c9359060133f898d0991db5963dadad919bd9b\" returns successfully" May 15 12:53:21.597322 kubelet[2701]: I0515 12:53:21.597016 2701 scope.go:117] "RemoveContainer" containerID="313b955a5f158124ee7f5810a2c9359060133f898d0991db5963dadad919bd9b" May 15 12:53:21.597559 containerd[1536]: time="2025-05-15T12:53:21.597351588Z" level=error msg="ContainerStatus for \"313b955a5f158124ee7f5810a2c9359060133f898d0991db5963dadad919bd9b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"313b955a5f158124ee7f5810a2c9359060133f898d0991db5963dadad919bd9b\": not found" May 15 12:53:21.597606 kubelet[2701]: E0515 12:53:21.597496 2701 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"313b955a5f158124ee7f5810a2c9359060133f898d0991db5963dadad919bd9b\": not found" containerID="313b955a5f158124ee7f5810a2c9359060133f898d0991db5963dadad919bd9b" May 15 12:53:21.597606 kubelet[2701]: I0515 12:53:21.597517 2701 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"313b955a5f158124ee7f5810a2c9359060133f898d0991db5963dadad919bd9b"} err="failed to get container status \"313b955a5f158124ee7f5810a2c9359060133f898d0991db5963dadad919bd9b\": rpc error: code = NotFound desc = an error occurred when try to find container \"313b955a5f158124ee7f5810a2c9359060133f898d0991db5963dadad919bd9b\": not found" May 15 12:53:21.598291 systemd[1]: Removed slice kubepods-besteffort-podad96d495_0fda_41e6_92fb_0740b7461b77.slice - libcontainer container kubepods-besteffort-podad96d495_0fda_41e6_92fb_0740b7461b77.slice. May 15 12:53:21.598627 systemd[1]: kubepods-besteffort-podad96d495_0fda_41e6_92fb_0740b7461b77.slice: Consumed 1.972s CPU time, 31.5M memory peak. May 15 12:53:21.617250 kubelet[2701]: I0515 12:53:21.616891 2701 kubelet.go:2306] "Pod admission denied" podUID="5037fe8e-e6cc-4df1-90a9-ef1cf004061b" pod="tigera-operator/tigera-operator-6f6897fdc5-lrxb5" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:21.770299 kubelet[2701]: I0515 12:53:21.770187 2701 kubelet.go:2306] "Pod admission denied" podUID="d7f21166-b676-4b4e-bdd1-4654acd6d313" pod="tigera-operator/tigera-operator-6f6897fdc5-sl8sx" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 12:53:21.893957 kubelet[2701]: I0515 12:53:21.893904 2701 eviction_manager.go:447] "Eviction manager: pods successfully cleaned up" pods=["tigera-operator/tigera-operator-6f6897fdc5-8rg8f"] May 15 12:53:21.918812 kubelet[2701]: I0515 12:53:21.918574 2701 kubelet.go:2306] "Pod admission denied" podUID="2648da4e-7b48-4e46-b6af-082c8683643f" pod="tigera-operator/tigera-operator-6f6897fdc5-hc2hz" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:22.067684 kubelet[2701]: I0515 12:53:22.067161 2701 kubelet.go:2306] "Pod admission denied" podUID="f0ad17b5-ce1d-415c-bfa0-2ac04bf82c8a" pod="tigera-operator/tigera-operator-6f6897fdc5-qzb8m" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:22.218635 kubelet[2701]: I0515 12:53:22.218581 2701 kubelet.go:2306] "Pod admission denied" podUID="6236669f-0943-45d8-8aeb-07c9fc54545a" pod="tigera-operator/tigera-operator-6f6897fdc5-jwtfj" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:22.367651 kubelet[2701]: I0515 12:53:22.367600 2701 kubelet.go:2306] "Pod admission denied" podUID="a071c50b-0a87-4fb0-bdda-9551d0c163d8" pod="tigera-operator/tigera-operator-6f6897fdc5-q8n98" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:22.518940 kubelet[2701]: I0515 12:53:22.518898 2701 kubelet.go:2306] "Pod admission denied" podUID="ced239fd-6083-4874-a901-59d93c167d47" pod="tigera-operator/tigera-operator-6f6897fdc5-6xk9q" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:22.769895 kubelet[2701]: I0515 12:53:22.769435 2701 kubelet.go:2306] "Pod admission denied" podUID="87f16148-d91c-4328-8171-182d2c5785a8" pod="tigera-operator/tigera-operator-6f6897fdc5-4f8dr" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:22.918619 kubelet[2701]: I0515 12:53:22.918574 2701 kubelet.go:2306] "Pod admission denied" podUID="683fd681-a5a4-423a-a500-9e2cf395457a" pod="tigera-operator/tigera-operator-6f6897fdc5-c77pm" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:23.017572 kubelet[2701]: I0515 12:53:23.017521 2701 kubelet.go:2306] "Pod admission denied" podUID="c4a9f4c5-6a27-474d-a06e-95ca168cb88a" pod="tigera-operator/tigera-operator-6f6897fdc5-cwbzf" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:23.168170 kubelet[2701]: I0515 12:53:23.168116 2701 kubelet.go:2306] "Pod admission denied" podUID="6886ceb4-bdb4-4c59-860b-1771df08ecac" pod="tigera-operator/tigera-operator-6f6897fdc5-rfzm4" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:23.320521 kubelet[2701]: I0515 12:53:23.319936 2701 kubelet.go:2306] "Pod admission denied" podUID="d389a373-acde-4a7b-a54f-f82f988a27eb" pod="tigera-operator/tigera-operator-6f6897fdc5-fcjb6" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:23.422256 kubelet[2701]: I0515 12:53:23.422139 2701 kubelet.go:2306] "Pod admission denied" podUID="1aab0771-0e47-4aa0-865c-57b6fd84c4a5" pod="tigera-operator/tigera-operator-6f6897fdc5-dmn4j" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:23.517358 kubelet[2701]: I0515 12:53:23.517320 2701 kubelet.go:2306] "Pod admission denied" podUID="9816af14-5bc3-4173-8f07-73057a4a5461" pod="tigera-operator/tigera-operator-6f6897fdc5-fl29v" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 12:53:23.616989 kubelet[2701]: I0515 12:53:23.616935 2701 kubelet.go:2306] "Pod admission denied" podUID="f16f5970-14a2-4371-a96c-3716fcbfc654" pod="tigera-operator/tigera-operator-6f6897fdc5-frxnq" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:23.716720 kubelet[2701]: I0515 12:53:23.716612 2701 kubelet.go:2306] "Pod admission denied" podUID="697a38c0-111c-4312-8c3c-b111bfa880e9" pod="tigera-operator/tigera-operator-6f6897fdc5-94smt" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:23.816516 kubelet[2701]: I0515 12:53:23.816452 2701 kubelet.go:2306] "Pod admission denied" podUID="c43697a2-5ce0-41a6-8c31-7ae2919433a7" pod="tigera-operator/tigera-operator-6f6897fdc5-l92q2" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:23.919222 kubelet[2701]: I0515 12:53:23.918753 2701 kubelet.go:2306] "Pod admission denied" podUID="5242e6e4-0ce1-4a7c-9ec2-65d1141ad652" pod="tigera-operator/tigera-operator-6f6897fdc5-zc9rd" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:24.120672 kubelet[2701]: I0515 12:53:24.120617 2701 kubelet.go:2306] "Pod admission denied" podUID="e709fb31-c049-404c-a8eb-cc94249fdacc" pod="tigera-operator/tigera-operator-6f6897fdc5-6qqpd" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:24.219442 kubelet[2701]: I0515 12:53:24.219379 2701 kubelet.go:2306] "Pod admission denied" podUID="4ab9469f-ad94-46bf-9fb2-fe3d54504c16" pod="tigera-operator/tigera-operator-6f6897fdc5-sr9k6" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:24.269838 kubelet[2701]: I0515 12:53:24.269799 2701 kubelet.go:2306] "Pod admission denied" podUID="e2818f01-755b-418e-9e90-b6fe1bcb736a" pod="tigera-operator/tigera-operator-6f6897fdc5-tfh4h" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:24.369893 kubelet[2701]: I0515 12:53:24.369856 2701 kubelet.go:2306] "Pod admission denied" podUID="5d048bcd-89ee-478a-8d47-5cba54cde383" pod="tigera-operator/tigera-operator-6f6897fdc5-pj29q" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:24.570556 kubelet[2701]: I0515 12:53:24.570388 2701 kubelet.go:2306] "Pod admission denied" podUID="1df7f033-5bd7-423d-b2e7-cf4b957fb553" pod="tigera-operator/tigera-operator-6f6897fdc5-6v6ns" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:24.670932 kubelet[2701]: I0515 12:53:24.670856 2701 kubelet.go:2306] "Pod admission denied" podUID="7c47f4a5-ca65-4f86-99a1-308ab05b1758" pod="tigera-operator/tigera-operator-6f6897fdc5-jbtgr" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:24.770331 kubelet[2701]: I0515 12:53:24.770257 2701 kubelet.go:2306] "Pod admission denied" podUID="226402a4-479b-4cbd-b723-16b736bb8166" pod="tigera-operator/tigera-operator-6f6897fdc5-zgqdq" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:24.915880 kubelet[2701]: I0515 12:53:24.915820 2701 kubelet.go:2306] "Pod admission denied" podUID="9c4755e9-4e3f-40f7-a25d-a6c4eee1e1ce" pod="tigera-operator/tigera-operator-6f6897fdc5-tjzq6" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 12:53:25.035102 kubelet[2701]: I0515 12:53:25.035046 2701 kubelet.go:2306] "Pod admission denied" podUID="0e756870-41ab-488d-b237-9902cd4215ac" pod="tigera-operator/tigera-operator-6f6897fdc5-8znwb" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:25.069224 kubelet[2701]: I0515 12:53:25.069143 2701 kubelet.go:2306] "Pod admission denied" podUID="07069685-15d8-435e-b5ac-fec5f42c1193" pod="tigera-operator/tigera-operator-6f6897fdc5-stxhh" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:25.167876 kubelet[2701]: I0515 12:53:25.167160 2701 kubelet.go:2306] "Pod admission denied" podUID="d5bb8fcb-cb50-4ea1-8a42-65bdef8127fb" pod="tigera-operator/tigera-operator-6f6897fdc5-gcv5q" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:25.270947 kubelet[2701]: I0515 12:53:25.270899 2701 kubelet.go:2306] "Pod admission denied" podUID="ec330ad5-0b61-4302-8f1d-ffa50ee18940" pod="tigera-operator/tigera-operator-6f6897fdc5-4blp4" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:25.370564 kubelet[2701]: I0515 12:53:25.370522 2701 kubelet.go:2306] "Pod admission denied" podUID="c01f2f88-94ae-41cb-986c-a668bd588fd1" pod="tigera-operator/tigera-operator-6f6897fdc5-5gb26" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:25.469871 kubelet[2701]: I0515 12:53:25.469739 2701 kubelet.go:2306] "Pod admission denied" podUID="7e8183ec-cc95-4df7-863e-69f929205e27" pod="tigera-operator/tigera-operator-6f6897fdc5-6gnz2" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:25.569323 kubelet[2701]: I0515 12:53:25.569276 2701 kubelet.go:2306] "Pod admission denied" podUID="a289d35c-ac3d-4f37-b6a5-460ccd459796" pod="tigera-operator/tigera-operator-6f6897fdc5-tcf4b" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:25.771141 kubelet[2701]: I0515 12:53:25.770889 2701 kubelet.go:2306] "Pod admission denied" podUID="ebf711a5-50d6-4cc4-ae4c-a855fd287cb7" pod="tigera-operator/tigera-operator-6f6897fdc5-l8b8g" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:25.868768 kubelet[2701]: I0515 12:53:25.868722 2701 kubelet.go:2306] "Pod admission denied" podUID="dca9deb4-7261-40cd-8f88-8fe63660aafd" pod="tigera-operator/tigera-operator-6f6897fdc5-m8bz4" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:25.919423 kubelet[2701]: I0515 12:53:25.919374 2701 kubelet.go:2306] "Pod admission denied" podUID="db5a9872-7f4e-4ca3-82c3-e877b1c044bc" pod="tigera-operator/tigera-operator-6f6897fdc5-2dnjz" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:26.020529 kubelet[2701]: I0515 12:53:26.020478 2701 kubelet.go:2306] "Pod admission denied" podUID="5684fc74-1d02-4a34-8486-e1da8a4dcf97" pod="tigera-operator/tigera-operator-6f6897fdc5-kr7w4" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:26.222675 kubelet[2701]: I0515 12:53:26.222188 2701 kubelet.go:2306] "Pod admission denied" podUID="70e789d0-a6f2-4fd1-b0cf-071f0f9700f0" pod="tigera-operator/tigera-operator-6f6897fdc5-l4fns" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 12:53:26.320886 kubelet[2701]: I0515 12:53:26.320850 2701 kubelet.go:2306] "Pod admission denied" podUID="1e1d1196-eeb0-4f7b-81c0-0ab54118a781" pod="tigera-operator/tigera-operator-6f6897fdc5-tdpb4" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:26.369550 kubelet[2701]: I0515 12:53:26.368784 2701 kubelet.go:2306] "Pod admission denied" podUID="270c4c5c-d750-409b-b502-7cc36f4bd664" pod="tigera-operator/tigera-operator-6f6897fdc5-4qs9q" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:26.470393 kubelet[2701]: I0515 12:53:26.470348 2701 kubelet.go:2306] "Pod admission denied" podUID="bce6e2e8-cc6f-4638-bba2-ef5b0ddc99f7" pod="tigera-operator/tigera-operator-6f6897fdc5-tvstc" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:26.570921 kubelet[2701]: I0515 12:53:26.570614 2701 kubelet.go:2306] "Pod admission denied" podUID="fbc023e2-3255-4213-aa27-03a3dbda737c" pod="tigera-operator/tigera-operator-6f6897fdc5-t4w28" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:26.699115 kubelet[2701]: I0515 12:53:26.699047 2701 kubelet.go:2306] "Pod admission denied" podUID="0be62972-6e36-4485-b42c-b4d89ba812cc" pod="tigera-operator/tigera-operator-6f6897fdc5-z4rf7" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:26.778444 kubelet[2701]: I0515 12:53:26.778368 2701 kubelet.go:2306] "Pod admission denied" podUID="d45b3546-d7fb-432f-8ae9-20816fca41af" pod="tigera-operator/tigera-operator-6f6897fdc5-pg46j" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:26.881301 kubelet[2701]: I0515 12:53:26.881199 2701 kubelet.go:2306] "Pod admission denied" podUID="3fcdc2f8-9c8d-406a-af8b-cd29613c5a51" pod="tigera-operator/tigera-operator-6f6897fdc5-kxpd9" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:26.969628 kubelet[2701]: I0515 12:53:26.969563 2701 kubelet.go:2306] "Pod admission denied" podUID="4ca666e6-33bb-42d6-81d2-1773fe56d9d7" pod="tigera-operator/tigera-operator-6f6897fdc5-6f9dd" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:27.074279 kubelet[2701]: I0515 12:53:27.074212 2701 kubelet.go:2306] "Pod admission denied" podUID="73067e08-f285-40bc-8f8f-f79bbfb3bd5b" pod="tigera-operator/tigera-operator-6f6897fdc5-n2c9v" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:27.273680 kubelet[2701]: I0515 12:53:27.273559 2701 kubelet.go:2306] "Pod admission denied" podUID="973f2002-c785-4331-a818-3bd9cf66bd15" pod="tigera-operator/tigera-operator-6f6897fdc5-qchhm" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:27.369492 kubelet[2701]: I0515 12:53:27.369404 2701 kubelet.go:2306] "Pod admission denied" podUID="204e00be-e309-4a64-aac2-9fee322b3abe" pod="tigera-operator/tigera-operator-6f6897fdc5-ll7wv" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:27.471258 kubelet[2701]: I0515 12:53:27.471217 2701 kubelet.go:2306] "Pod admission denied" podUID="7b0f9ef5-6795-4940-aab4-4035ca4c4898" pod="tigera-operator/tigera-operator-6f6897fdc5-8vtxp" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 12:53:27.671248 kubelet[2701]: I0515 12:53:27.670517 2701 kubelet.go:2306] "Pod admission denied" podUID="c2b12195-2ee3-42c1-a319-1d21f60eea46" pod="tigera-operator/tigera-operator-6f6897fdc5-2xc9b" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:27.772485 kubelet[2701]: I0515 12:53:27.772037 2701 kubelet.go:2306] "Pod admission denied" podUID="2ae9ed9f-cd5d-4845-923d-9bd82277f96f" pod="tigera-operator/tigera-operator-6f6897fdc5-gdq5f" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:27.868200 kubelet[2701]: I0515 12:53:27.868115 2701 kubelet.go:2306] "Pod admission denied" podUID="87b3a010-1d87-44f2-8b8e-e84902bc5b54" pod="tigera-operator/tigera-operator-6f6897fdc5-xnx4v" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:27.968516 kubelet[2701]: I0515 12:53:27.968391 2701 kubelet.go:2306] "Pod admission denied" podUID="1dd53850-2324-41a6-ad43-ab16843a2ac2" pod="tigera-operator/tigera-operator-6f6897fdc5-c678p" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:28.071032 kubelet[2701]: I0515 12:53:28.070965 2701 kubelet.go:2306] "Pod admission denied" podUID="278dc46c-7a5b-4601-9744-4888d3549bde" pod="tigera-operator/tigera-operator-6f6897fdc5-ds5nr" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:28.270110 kubelet[2701]: I0515 12:53:28.269813 2701 kubelet.go:2306] "Pod admission denied" podUID="2e51887e-9a22-4d0b-a939-33cc0d6f7678" pod="tigera-operator/tigera-operator-6f6897fdc5-bfwp5" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:28.370108 kubelet[2701]: I0515 12:53:28.370070 2701 kubelet.go:2306] "Pod admission denied" podUID="923e173f-97c0-431e-b4a5-72cb3352cef8" pod="tigera-operator/tigera-operator-6f6897fdc5-brxpk" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:28.418022 kubelet[2701]: I0515 12:53:28.417973 2701 kubelet.go:2306] "Pod admission denied" podUID="fc63a20e-0091-471c-a53c-9292479c557d" pod="tigera-operator/tigera-operator-6f6897fdc5-ccx4q" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:28.520566 kubelet[2701]: I0515 12:53:28.520447 2701 kubelet.go:2306] "Pod admission denied" podUID="855db615-69ab-4630-b7b3-b58634a029f2" pod="tigera-operator/tigera-operator-6f6897fdc5-qrzqd" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:28.620967 kubelet[2701]: I0515 12:53:28.620826 2701 kubelet.go:2306] "Pod admission denied" podUID="7754d155-8bae-4287-ba09-9ed5f36b01f2" pod="tigera-operator/tigera-operator-6f6897fdc5-lc5g5" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:28.721730 kubelet[2701]: I0515 12:53:28.721662 2701 kubelet.go:2306] "Pod admission denied" podUID="eb114175-8737-49f5-b08e-c5140ca74fe9" pod="tigera-operator/tigera-operator-6f6897fdc5-m5v8f" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:28.821053 kubelet[2701]: I0515 12:53:28.820841 2701 kubelet.go:2306] "Pod admission denied" podUID="01f1d78d-2be2-4609-8d84-08eab77e10c1" pod="tigera-operator/tigera-operator-6f6897fdc5-tqwnf" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 12:53:28.921937 kubelet[2701]: I0515 12:53:28.921892 2701 kubelet.go:2306] "Pod admission denied" podUID="85e8627b-78a6-48c9-9b2c-e7586c254fe0" pod="tigera-operator/tigera-operator-6f6897fdc5-hmlqj" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:29.020734 kubelet[2701]: I0515 12:53:29.020695 2701 kubelet.go:2306] "Pod admission denied" podUID="aa2f08b1-03fe-4090-ad96-8ae59a26aabd" pod="tigera-operator/tigera-operator-6f6897fdc5-tkkhs" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:29.120038 kubelet[2701]: I0515 12:53:29.120003 2701 kubelet.go:2306] "Pod admission denied" podUID="c1312896-115a-4d4f-b3f2-8c6810e5fe1b" pod="tigera-operator/tigera-operator-6f6897fdc5-vr4bn" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:29.220636 kubelet[2701]: I0515 12:53:29.220594 2701 kubelet.go:2306] "Pod admission denied" podUID="23166e85-5869-4d0d-9f69-12a0a6b6f491" pod="tigera-operator/tigera-operator-6f6897fdc5-zn84g" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:29.330134 kubelet[2701]: I0515 12:53:29.330077 2701 kubelet.go:2306] "Pod admission denied" podUID="25bc8af6-ac6e-48a9-b889-c89012d42ea5" pod="tigera-operator/tigera-operator-6f6897fdc5-4hn6t" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:29.408064 kubelet[2701]: E0515 12:53:29.407529 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:53:29.408569 containerd[1536]: time="2025-05-15T12:53:29.408403706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d7hrh,Uid:30c809cf-5d96-45fe-9af3-2a80162d2f28,Namespace:calico-system,Attempt:0,}" May 15 12:53:29.409349 kubelet[2701]: E0515 12:53:29.409249 2701 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.29.3\\\"\"" pod="calico-system/calico-node-qxkgl" podUID="f69f6551-7509-4bf1-a40a-1170e25b66f0" May 15 12:53:29.436576 kubelet[2701]: I0515 12:53:29.436532 2701 kubelet.go:2306] "Pod admission denied" podUID="053348cd-383d-4ab2-b2e4-26d44f2a30e3" pod="tigera-operator/tigera-operator-6f6897fdc5-fvr2d" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 12:53:29.505404 containerd[1536]: time="2025-05-15T12:53:29.505356491Z" level=error msg="Failed to destroy network for sandbox \"b92fc324d721b989f39acb7f530617e2d1600578490b60f76235f85cf9c9e6f8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:29.506909 containerd[1536]: time="2025-05-15T12:53:29.506848113Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d7hrh,Uid:30c809cf-5d96-45fe-9af3-2a80162d2f28,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b92fc324d721b989f39acb7f530617e2d1600578490b60f76235f85cf9c9e6f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:29.507644 kubelet[2701]: E0515 12:53:29.507033 2701 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b92fc324d721b989f39acb7f530617e2d1600578490b60f76235f85cf9c9e6f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:29.507644 kubelet[2701]: E0515 12:53:29.507088 2701 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b92fc324d721b989f39acb7f530617e2d1600578490b60f76235f85cf9c9e6f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d7hrh" May 15 12:53:29.507644 kubelet[2701]: E0515 12:53:29.507109 2701 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b92fc324d721b989f39acb7f530617e2d1600578490b60f76235f85cf9c9e6f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d7hrh" May 15 12:53:29.507644 kubelet[2701]: E0515 12:53:29.507143 2701 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-d7hrh_calico-system(30c809cf-5d96-45fe-9af3-2a80162d2f28)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-d7hrh_calico-system(30c809cf-5d96-45fe-9af3-2a80162d2f28)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b92fc324d721b989f39acb7f530617e2d1600578490b60f76235f85cf9c9e6f8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-d7hrh" podUID="30c809cf-5d96-45fe-9af3-2a80162d2f28" May 15 12:53:29.508757 systemd[1]: run-netns-cni\x2d01c9c425\x2db4ea\x2d2914\x2dd611\x2d216c3013c48c.mount: Deactivated successfully. 
May 15 12:53:29.569493 kubelet[2701]: I0515 12:53:29.569441 2701 kubelet.go:2306] "Pod admission denied" podUID="e8bbe740-a0c2-4345-a16f-5925ba69bafa" pod="tigera-operator/tigera-operator-6f6897fdc5-lrnqq" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:29.670919 kubelet[2701]: I0515 12:53:29.670539 2701 kubelet.go:2306] "Pod admission denied" podUID="e2004728-b01e-4e9d-91b0-e38c4d3a92d2" pod="tigera-operator/tigera-operator-6f6897fdc5-s4xnx" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:29.869677 kubelet[2701]: I0515 12:53:29.869633 2701 kubelet.go:2306] "Pod admission denied" podUID="c96fc521-ca1d-4c30-9c59-209ba4eea117" pod="tigera-operator/tigera-operator-6f6897fdc5-cd6bj" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:29.972038 kubelet[2701]: I0515 12:53:29.971317 2701 kubelet.go:2306] "Pod admission denied" podUID="1e55271a-6b49-4157-bd5a-6ae73f117ca2" pod="tigera-operator/tigera-operator-6f6897fdc5-x58gr" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:30.019348 kubelet[2701]: I0515 12:53:30.019085 2701 kubelet.go:2306] "Pod admission denied" podUID="4424f706-b081-42a8-8a58-6623f3e0faef" pod="tigera-operator/tigera-operator-6f6897fdc5-md6qh" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:30.120477 kubelet[2701]: I0515 12:53:30.120415 2701 kubelet.go:2306] "Pod admission denied" podUID="3083eb77-f222-4df9-86ed-cb5ac78d9826" pod="tigera-operator/tigera-operator-6f6897fdc5-d9256" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:30.219331 kubelet[2701]: I0515 12:53:30.219289 2701 kubelet.go:2306] "Pod admission denied" podUID="520aedc9-4d2c-4e45-9a35-b9ff81525e61" pod="tigera-operator/tigera-operator-6f6897fdc5-4pz2p" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:30.269436 kubelet[2701]: I0515 12:53:30.269329 2701 kubelet.go:2306] "Pod admission denied" podUID="71a820e9-07ee-4efb-9158-cbeb69ea7778" pod="tigera-operator/tigera-operator-6f6897fdc5-949dt" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:30.371530 kubelet[2701]: I0515 12:53:30.371449 2701 kubelet.go:2306] "Pod admission denied" podUID="33e0a601-02ee-44a2-9415-f83f72580d40" pod="tigera-operator/tigera-operator-6f6897fdc5-pvxmx" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:30.407335 kubelet[2701]: E0515 12:53:30.407045 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:53:30.407753 containerd[1536]: time="2025-05-15T12:53:30.407716027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hfrjd,Uid:be5ba310-2fcd-4763-b9cf-8dba85ce0f76,Namespace:kube-system,Attempt:0,}" May 15 12:53:30.464428 containerd[1536]: time="2025-05-15T12:53:30.464385988Z" level=error msg="Failed to destroy network for sandbox \"77fd5c6513dfd314e3f1cc95742914f87bda9bb8e7dcf1125b36c39511125a6c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:30.466899 systemd[1]: run-netns-cni\x2d6d50a11f\x2df8f7\x2db95b\x2d9395\x2dc96239c0c0d1.mount: Deactivated successfully. 
May 15 12:53:30.467180 containerd[1536]: time="2025-05-15T12:53:30.467151150Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hfrjd,Uid:be5ba310-2fcd-4763-b9cf-8dba85ce0f76,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"77fd5c6513dfd314e3f1cc95742914f87bda9bb8e7dcf1125b36c39511125a6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:30.467386 kubelet[2701]: E0515 12:53:30.467339 2701 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77fd5c6513dfd314e3f1cc95742914f87bda9bb8e7dcf1125b36c39511125a6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:30.467426 kubelet[2701]: E0515 12:53:30.467393 2701 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77fd5c6513dfd314e3f1cc95742914f87bda9bb8e7dcf1125b36c39511125a6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-hfrjd" May 15 12:53:30.467426 kubelet[2701]: E0515 12:53:30.467413 2701 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77fd5c6513dfd314e3f1cc95742914f87bda9bb8e7dcf1125b36c39511125a6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-hfrjd" May 15 12:53:30.467555 kubelet[2701]: E0515 12:53:30.467488 2701 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-hfrjd_kube-system(be5ba310-2fcd-4763-b9cf-8dba85ce0f76)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-hfrjd_kube-system(be5ba310-2fcd-4763-b9cf-8dba85ce0f76)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"77fd5c6513dfd314e3f1cc95742914f87bda9bb8e7dcf1125b36c39511125a6c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-hfrjd" podUID="be5ba310-2fcd-4763-b9cf-8dba85ce0f76" May 15 12:53:30.569880 kubelet[2701]: I0515 12:53:30.569777 2701 kubelet.go:2306] "Pod admission denied" podUID="30d433b7-fe5c-4c0c-8e75-f86fb3912154" pod="tigera-operator/tigera-operator-6f6897fdc5-54ggr" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:30.670441 kubelet[2701]: I0515 12:53:30.670395 2701 kubelet.go:2306] "Pod admission denied" podUID="8b0fe8ca-82ae-4120-9b1c-d74410bb7c5f" pod="tigera-operator/tigera-operator-6f6897fdc5-x2g55" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 12:53:30.785159 kubelet[2701]: I0515 12:53:30.785118 2701 kubelet.go:2306] "Pod admission denied" podUID="492a49cd-00e0-47bc-8159-9ee2d64472aa" pod="tigera-operator/tigera-operator-6f6897fdc5-b49nt" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:30.876706 kubelet[2701]: I0515 12:53:30.876659 2701 kubelet.go:2306] "Pod admission denied" podUID="ff7d69be-ad22-433f-82da-83bdda997bfb" pod="tigera-operator/tigera-operator-6f6897fdc5-28575" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:30.973108 kubelet[2701]: I0515 12:53:30.972999 2701 kubelet.go:2306] "Pod admission denied" podUID="abdb43d6-1758-4aa8-b72d-c0d53a9a4524" pod="tigera-operator/tigera-operator-6f6897fdc5-59skd" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:31.069704 kubelet[2701]: I0515 12:53:31.069438 2701 kubelet.go:2306] "Pod admission denied" podUID="2394375c-8919-4f5e-9227-f0862779c2d2" pod="tigera-operator/tigera-operator-6f6897fdc5-6mww7" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:31.169375 kubelet[2701]: I0515 12:53:31.169287 2701 kubelet.go:2306] "Pod admission denied" podUID="4dec53c0-e4f1-4eeb-a6cd-4dabf30a7d54" pod="tigera-operator/tigera-operator-6f6897fdc5-cvndd" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:31.272745 kubelet[2701]: I0515 12:53:31.272697 2701 kubelet.go:2306] "Pod admission denied" podUID="c881385c-f227-4e83-89cd-abdd63a7458b" pod="tigera-operator/tigera-operator-6f6897fdc5-mpjr6" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:31.377589 kubelet[2701]: I0515 12:53:31.376764 2701 kubelet.go:2306] "Pod admission denied" podUID="76fa375c-bab2-4055-ac38-7a82c59f57e6" pod="tigera-operator/tigera-operator-6f6897fdc5-kdhcj" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:31.569640 kubelet[2701]: I0515 12:53:31.569501 2701 kubelet.go:2306] "Pod admission denied" podUID="81069228-c2e9-42b3-a273-82c70e65c020" pod="tigera-operator/tigera-operator-6f6897fdc5-hbnvw" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:31.671025 kubelet[2701]: I0515 12:53:31.670959 2701 kubelet.go:2306] "Pod admission denied" podUID="8a9414d6-7fbe-4480-bec5-ecb84f562088" pod="tigera-operator/tigera-operator-6f6897fdc5-bg8j7" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:31.773193 kubelet[2701]: I0515 12:53:31.773144 2701 kubelet.go:2306] "Pod admission denied" podUID="4462b426-67c8-46a1-b676-95dcdcdaaf04" pod="tigera-operator/tigera-operator-6f6897fdc5-rhzk8" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:31.872698 kubelet[2701]: I0515 12:53:31.872644 2701 kubelet.go:2306] "Pod admission denied" podUID="8efd3068-a84a-425c-82e7-8cb17ccce9b5" pod="tigera-operator/tigera-operator-6f6897fdc5-wl4dm" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 12:53:31.928863 kubelet[2701]: I0515 12:53:31.928828 2701 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 12:53:31.928863 kubelet[2701]: I0515 12:53:31.928866 2701 container_gc.go:88] "Attempting to delete unused containers" May 15 12:53:31.930483 containerd[1536]: time="2025-05-15T12:53:31.930228745Z" level=info msg="StopPodSandbox for \"7f168a58389272332e2519584a46f384575c19cf860482036ce9b51a0fc3a0e1\"" May 15 12:53:31.930483 containerd[1536]: time="2025-05-15T12:53:31.930388165Z" level=info msg="TearDown network for sandbox \"7f168a58389272332e2519584a46f384575c19cf860482036ce9b51a0fc3a0e1\" successfully" May 15 12:53:31.930483 containerd[1536]: time="2025-05-15T12:53:31.930399285Z" level=info msg="StopPodSandbox for \"7f168a58389272332e2519584a46f384575c19cf860482036ce9b51a0fc3a0e1\" returns successfully" May 15 12:53:31.933114 containerd[1536]: time="2025-05-15T12:53:31.931274935Z" level=info msg="RemovePodSandbox for \"7f168a58389272332e2519584a46f384575c19cf860482036ce9b51a0fc3a0e1\"" May 15 12:53:31.933114 containerd[1536]: time="2025-05-15T12:53:31.931297445Z" level=info msg="Forcibly stopping sandbox \"7f168a58389272332e2519584a46f384575c19cf860482036ce9b51a0fc3a0e1\"" May 15 12:53:31.933114 containerd[1536]: time="2025-05-15T12:53:31.931349205Z" level=info msg="TearDown network for sandbox \"7f168a58389272332e2519584a46f384575c19cf860482036ce9b51a0fc3a0e1\" successfully" May 15 12:53:31.933114 containerd[1536]: time="2025-05-15T12:53:31.932722376Z" level=info msg="Ensure that sandbox 7f168a58389272332e2519584a46f384575c19cf860482036ce9b51a0fc3a0e1 in task-service has been cleanup successfully" May 15 12:53:31.934657 containerd[1536]: time="2025-05-15T12:53:31.934615278Z" level=info msg="RemovePodSandbox \"7f168a58389272332e2519584a46f384575c19cf860482036ce9b51a0fc3a0e1\" returns successfully" May 15 12:53:31.935296 kubelet[2701]: I0515 12:53:31.935280 2701 image_gc_manager.go:431] "Attempting to delete unused images" May 15 12:53:31.945742 kubelet[2701]: I0515 12:53:31.945718 2701 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 12:53:31.945845 kubelet[2701]: I0515 12:53:31.945788 2701 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-6f6b679f8f-hfrjd","calico-system/calico-kube-controllers-6cff7bc5b6-sff94","kube-system/coredns-6f6b679f8f-svgwx","calico-system/csi-node-driver-d7hrh","calico-system/calico-node-qxkgl","calico-system/calico-typha-59b79bbb46-9qqgw","kube-system/kube-controller-manager-172-232-9-197","kube-system/kube-proxy-rhz8r","kube-system/kube-apiserver-172-232-9-197","kube-system/kube-scheduler-172-232-9-197"] May 15 12:53:31.945939 kubelet[2701]: E0515 12:53:31.945845 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-hfrjd" May 15 12:53:31.945939 kubelet[2701]: E0515 12:53:31.945856 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-6cff7bc5b6-sff94" May 15 12:53:31.945939 kubelet[2701]: E0515 12:53:31.945863 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-svgwx" May 15 12:53:31.945939 kubelet[2701]: E0515 12:53:31.945870 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-d7hrh" May 15 12:53:31.945939 kubelet[2701]: E0515 
12:53:31.945876 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-qxkgl" May 15 12:53:31.945939 kubelet[2701]: E0515 12:53:31.945887 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-59b79bbb46-9qqgw" May 15 12:53:31.945939 kubelet[2701]: E0515 12:53:31.945896 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-232-9-197" May 15 12:53:31.945939 kubelet[2701]: E0515 12:53:31.945905 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-rhz8r" May 15 12:53:31.945939 kubelet[2701]: E0515 12:53:31.945912 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-232-9-197" May 15 12:53:31.945939 kubelet[2701]: E0515 12:53:31.945920 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-232-9-197" May 15 12:53:31.945939 kubelet[2701]: I0515 12:53:31.945928 2701 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 12:53:31.969763 kubelet[2701]: I0515 12:53:31.969703 2701 kubelet.go:2306] "Pod admission denied" podUID="31034b96-581a-4e72-a3fa-701720c24036" pod="tigera-operator/tigera-operator-6f6897fdc5-bdlbs" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:32.071164 kubelet[2701]: I0515 12:53:32.071118 2701 kubelet.go:2306] "Pod admission denied" podUID="7990c2dc-09cc-452f-86a5-d9d3910342ed" pod="tigera-operator/tigera-operator-6f6897fdc5-nvdpp" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:32.169384 kubelet[2701]: I0515 12:53:32.168825 2701 kubelet.go:2306] "Pod admission denied" podUID="133d4960-2075-4f87-8b61-375fb306dd92" pod="tigera-operator/tigera-operator-6f6897fdc5-829vk" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:32.272441 kubelet[2701]: I0515 12:53:32.272371 2701 kubelet.go:2306] "Pod admission denied" podUID="4575c2c1-403c-4bd8-b7e2-8d3cf25e033a" pod="tigera-operator/tigera-operator-6f6897fdc5-jw98k" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:32.320160 kubelet[2701]: I0515 12:53:32.320120 2701 kubelet.go:2306] "Pod admission denied" podUID="afc90983-a041-42e6-a167-1d31f9a6743d" pod="tigera-operator/tigera-operator-6f6897fdc5-kfh9w" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:32.408008 containerd[1536]: time="2025-05-15T12:53:32.407938859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cff7bc5b6-sff94,Uid:9345d3c6-0d7e-4691-8b9a-ecd0176d0441,Namespace:calico-system,Attempt:0,}" May 15 12:53:32.435008 kubelet[2701]: I0515 12:53:32.434823 2701 kubelet.go:2306] "Pod admission denied" podUID="130fff40-26c4-4407-9c7e-096319165ffc" pod="tigera-operator/tigera-operator-6f6897fdc5-sqzm7" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 12:53:32.484075 containerd[1536]: time="2025-05-15T12:53:32.484028601Z" level=error msg="Failed to destroy network for sandbox \"d3344653229922d9522f88b9864e42738be9d9a5a963aa550590c66f038f5e78\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:32.486450 systemd[1]: run-netns-cni\x2d13bafde8\x2d6bbb\x2d7653\x2d9830\x2d32c2113eed90.mount: Deactivated successfully. May 15 12:53:32.487894 containerd[1536]: time="2025-05-15T12:53:32.487736183Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cff7bc5b6-sff94,Uid:9345d3c6-0d7e-4691-8b9a-ecd0176d0441,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3344653229922d9522f88b9864e42738be9d9a5a963aa550590c66f038f5e78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:32.488341 kubelet[2701]: E0515 12:53:32.488301 2701 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3344653229922d9522f88b9864e42738be9d9a5a963aa550590c66f038f5e78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:32.488659 kubelet[2701]: E0515 12:53:32.488570 2701 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3344653229922d9522f88b9864e42738be9d9a5a963aa550590c66f038f5e78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6cff7bc5b6-sff94" May 15 12:53:32.488659 kubelet[2701]: E0515 12:53:32.488630 2701 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3344653229922d9522f88b9864e42738be9d9a5a963aa550590c66f038f5e78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6cff7bc5b6-sff94" May 15 12:53:32.488705 kubelet[2701]: E0515 12:53:32.488673 2701 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6cff7bc5b6-sff94_calico-system(9345d3c6-0d7e-4691-8b9a-ecd0176d0441)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6cff7bc5b6-sff94_calico-system(9345d3c6-0d7e-4691-8b9a-ecd0176d0441)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d3344653229922d9522f88b9864e42738be9d9a5a963aa550590c66f038f5e78\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6cff7bc5b6-sff94" podUID="9345d3c6-0d7e-4691-8b9a-ecd0176d0441" May 15 12:53:32.521625 kubelet[2701]: I0515 12:53:32.521578 2701 kubelet.go:2306] "Pod admission denied" 
podUID="7e9e3db8-563a-4216-9121-003eec145c4c" pod="tigera-operator/tigera-operator-6f6897fdc5-bm9ds" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:32.623388 kubelet[2701]: I0515 12:53:32.623347 2701 kubelet.go:2306] "Pod admission denied" podUID="c5883068-12ae-40aa-82c2-081678b35bab" pod="tigera-operator/tigera-operator-6f6897fdc5-8hnbk" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:32.725295 kubelet[2701]: I0515 12:53:32.724849 2701 kubelet.go:2306] "Pod admission denied" podUID="9acb9b61-4eaa-4574-9e6d-de3c97726976" pod="tigera-operator/tigera-operator-6f6897fdc5-c6bcm" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:32.823005 kubelet[2701]: I0515 12:53:32.822965 2701 kubelet.go:2306] "Pod admission denied" podUID="6941bd54-c324-4e03-b581-1e0daf73caa2" pod="tigera-operator/tigera-operator-6f6897fdc5-gwxsp" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:32.923211 kubelet[2701]: I0515 12:53:32.923176 2701 kubelet.go:2306] "Pod admission denied" podUID="79335b0a-f74f-4584-b302-9ab7940bd582" pod="tigera-operator/tigera-operator-6f6897fdc5-r9sm2" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:32.970153 kubelet[2701]: I0515 12:53:32.970116 2701 kubelet.go:2306] "Pod admission denied" podUID="74fb256e-971f-4523-a9ca-84a127848ec6" pod="tigera-operator/tigera-operator-6f6897fdc5-xmd6x" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:33.072367 kubelet[2701]: I0515 12:53:33.072095 2701 kubelet.go:2306] "Pod admission denied" podUID="9495bc1b-7d0b-4174-92f2-8bd564a24e02" pod="tigera-operator/tigera-operator-6f6897fdc5-kgkm7" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:33.172404 kubelet[2701]: I0515 12:53:33.172138 2701 kubelet.go:2306] "Pod admission denied" podUID="ebe46244-2b84-422d-8fe4-363913909a8d" pod="tigera-operator/tigera-operator-6f6897fdc5-cs4qv" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:33.272313 kubelet[2701]: I0515 12:53:33.272272 2701 kubelet.go:2306] "Pod admission denied" podUID="621d9751-70d5-4260-a285-f9e744af5f0d" pod="tigera-operator/tigera-operator-6f6897fdc5-lzrct" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:33.408153 kubelet[2701]: E0515 12:53:33.407514 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:53:33.472381 kubelet[2701]: I0515 12:53:33.472337 2701 kubelet.go:2306] "Pod admission denied" podUID="3559bc13-fb48-4e63-b261-c0bdee0c6644" pod="tigera-operator/tigera-operator-6f6897fdc5-dnstl" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:33.573557 kubelet[2701]: I0515 12:53:33.573497 2701 kubelet.go:2306] "Pod admission denied" podUID="be376a75-979a-43f8-9968-040b3c0b393c" pod="tigera-operator/tigera-operator-6f6897fdc5-czt7l" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:33.672486 kubelet[2701]: I0515 12:53:33.671696 2701 kubelet.go:2306] "Pod admission denied" podUID="e4be84dc-8871-4965-9d01-018d5fd50e2d" pod="tigera-operator/tigera-operator-6f6897fdc5-mpkr4" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 12:53:33.874510 kubelet[2701]: I0515 12:53:33.874439 2701 kubelet.go:2306] "Pod admission denied" podUID="ca9b1fc6-328a-4e9d-8855-2004f0e8c361" pod="tigera-operator/tigera-operator-6f6897fdc5-cshvt" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:33.975062 kubelet[2701]: I0515 12:53:33.974730 2701 kubelet.go:2306] "Pod admission denied" podUID="c4bbe1f8-2d5a-44ac-ba97-d21feb01535b" pod="tigera-operator/tigera-operator-6f6897fdc5-stkx7" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:34.077484 kubelet[2701]: I0515 12:53:34.075732 2701 kubelet.go:2306] "Pod admission denied" podUID="df32efab-c983-4090-9dce-8c2aa3acd00a" pod="tigera-operator/tigera-operator-6f6897fdc5-b97mx" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:34.273017 kubelet[2701]: I0515 12:53:34.272587 2701 kubelet.go:2306] "Pod admission denied" podUID="8dcae55a-d6f3-49ec-9dee-926b480a2077" pod="tigera-operator/tigera-operator-6f6897fdc5-5r6kk" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:34.371626 kubelet[2701]: I0515 12:53:34.371565 2701 kubelet.go:2306] "Pod admission denied" podUID="fbd0f223-cb36-491f-b041-1135e527db95" pod="tigera-operator/tigera-operator-6f6897fdc5-s6crz" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:34.419588 kubelet[2701]: I0515 12:53:34.419557 2701 kubelet.go:2306] "Pod admission denied" podUID="d98e75d9-9dc8-4a4e-921b-c166286e7d23" pod="tigera-operator/tigera-operator-6f6897fdc5-ptsph" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:34.521958 kubelet[2701]: I0515 12:53:34.521914 2701 kubelet.go:2306] "Pod admission denied" podUID="bab5b1cd-43b6-45dd-974f-5042f8dec7a9" pod="tigera-operator/tigera-operator-6f6897fdc5-bd47p" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:34.722519 kubelet[2701]: I0515 12:53:34.722476 2701 kubelet.go:2306] "Pod admission denied" podUID="9674143c-172b-46be-9295-84c8e23deb98" pod="tigera-operator/tigera-operator-6f6897fdc5-7tmbg" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:34.826537 kubelet[2701]: I0515 12:53:34.826501 2701 kubelet.go:2306] "Pod admission denied" podUID="4f72c349-fec3-4e27-88fb-716b915c6d51" pod="tigera-operator/tigera-operator-6f6897fdc5-kh2mn" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:34.921543 kubelet[2701]: I0515 12:53:34.921495 2701 kubelet.go:2306] "Pod admission denied" podUID="f0f6ad2e-3eba-434c-8f8d-2724453f7f62" pod="tigera-operator/tigera-operator-6f6897fdc5-stlq6" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:35.122125 kubelet[2701]: I0515 12:53:35.122062 2701 kubelet.go:2306] "Pod admission denied" podUID="a1da9c68-e4bf-48ba-92e0-6087f0809b3f" pod="tigera-operator/tigera-operator-6f6897fdc5-dq9dn" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:35.221814 kubelet[2701]: I0515 12:53:35.221757 2701 kubelet.go:2306] "Pod admission denied" podUID="301e229f-4528-497b-87c9-09ca09acfbd3" pod="tigera-operator/tigera-operator-6f6897fdc5-55xl5" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 12:53:35.272682 kubelet[2701]: I0515 12:53:35.272641 2701 kubelet.go:2306] "Pod admission denied" podUID="8754977b-a11f-4854-a51d-3a686e489a75" pod="tigera-operator/tigera-operator-6f6897fdc5-9spmq" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:35.371808 kubelet[2701]: I0515 12:53:35.371762 2701 kubelet.go:2306] "Pod admission denied" podUID="b0748938-c279-49a5-aa88-ac78bf9155de" pod="tigera-operator/tigera-operator-6f6897fdc5-8rs58" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:35.407646 kubelet[2701]: E0515 12:53:35.407537 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:53:35.408532 containerd[1536]: time="2025-05-15T12:53:35.408239414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-svgwx,Uid:a44ca807-6ed0-447f-9c4f-1a10e61b025b,Namespace:kube-system,Attempt:0,}" May 15 12:53:35.499689 kubelet[2701]: I0515 12:53:35.497864 2701 kubelet.go:2306] "Pod admission denied" podUID="d55341c8-021b-41d0-a4c1-cdbfa9ba64b5" pod="tigera-operator/tigera-operator-6f6897fdc5-9qknv" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:35.521669 containerd[1536]: time="2025-05-15T12:53:35.521407672Z" level=error msg="Failed to destroy network for sandbox \"b267bf18db09efc70a58cc9e5ee907922c3cd8d95b95169531b85d354747547b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:35.525602 containerd[1536]: time="2025-05-15T12:53:35.525225677Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-svgwx,Uid:a44ca807-6ed0-447f-9c4f-1a10e61b025b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b267bf18db09efc70a58cc9e5ee907922c3cd8d95b95169531b85d354747547b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:35.526325 kubelet[2701]: E0515 12:53:35.526009 2701 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b267bf18db09efc70a58cc9e5ee907922c3cd8d95b95169531b85d354747547b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:35.526325 kubelet[2701]: E0515 12:53:35.526061 2701 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b267bf18db09efc70a58cc9e5ee907922c3cd8d95b95169531b85d354747547b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-svgwx" May 15 12:53:35.526325 kubelet[2701]: E0515 12:53:35.526090 2701 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b267bf18db09efc70a58cc9e5ee907922c3cd8d95b95169531b85d354747547b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-svgwx" May 15 12:53:35.526325 kubelet[2701]: E0515 12:53:35.526127 2701 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-svgwx_kube-system(a44ca807-6ed0-447f-9c4f-1a10e61b025b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-svgwx_kube-system(a44ca807-6ed0-447f-9c4f-1a10e61b025b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b267bf18db09efc70a58cc9e5ee907922c3cd8d95b95169531b85d354747547b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-svgwx" podUID="a44ca807-6ed0-447f-9c4f-1a10e61b025b" May 15 12:53:35.526850 systemd[1]: run-netns-cni\x2d80dd70d8\x2d2983\x2d4ff1\x2d3c77\x2d94290016694c.mount: Deactivated successfully. May 15 12:53:35.571319 kubelet[2701]: I0515 12:53:35.571269 2701 kubelet.go:2306] "Pod admission denied" podUID="271e9670-0254-4987-8d90-6676be1128ec" pod="tigera-operator/tigera-operator-6f6897fdc5-www8r" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:35.670744 kubelet[2701]: I0515 12:53:35.670605 2701 kubelet.go:2306] "Pod admission denied" podUID="fea099c5-3058-4abc-b3e0-04ba21f0efba" pod="tigera-operator/tigera-operator-6f6897fdc5-wsrms" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:35.773227 kubelet[2701]: I0515 12:53:35.773188 2701 kubelet.go:2306] "Pod admission denied" podUID="adbec479-4e51-4c2f-b915-6c10938d34ff" pod="tigera-operator/tigera-operator-6f6897fdc5-w4q6t" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:35.871326 kubelet[2701]: I0515 12:53:35.871282 2701 kubelet.go:2306] "Pod admission denied" podUID="0c32be65-0e2c-4cfe-bb36-959ad27cda79" pod="tigera-operator/tigera-operator-6f6897fdc5-6q5dk" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:35.921158 kubelet[2701]: I0515 12:53:35.921057 2701 kubelet.go:2306] "Pod admission denied" podUID="d4b5aec1-db9c-4a13-a3a1-1cd6e7702c6a" pod="tigera-operator/tigera-operator-6f6897fdc5-rcfsz" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:36.030234 kubelet[2701]: I0515 12:53:36.029586 2701 kubelet.go:2306] "Pod admission denied" podUID="8396635c-334e-45e5-8ae8-6181b0a99c5f" pod="tigera-operator/tigera-operator-6f6897fdc5-pkpmm" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:36.121700 kubelet[2701]: I0515 12:53:36.121642 2701 kubelet.go:2306] "Pod admission denied" podUID="4071a174-963e-4587-b939-73cbef16fe1f" pod="tigera-operator/tigera-operator-6f6897fdc5-ffq4j" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:36.220751 kubelet[2701]: I0515 12:53:36.220616 2701 kubelet.go:2306] "Pod admission denied" podUID="c1a68aaa-6649-4460-8eff-2bf475dbf39d" pod="tigera-operator/tigera-operator-6f6897fdc5-976b4" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:36.421756 kubelet[2701]: I0515 12:53:36.421708 2701 kubelet.go:2306] "Pod admission denied" podUID="06759595-8e0e-4749-a5d5-bf06b90da668" pod="tigera-operator/tigera-operator-6f6897fdc5-qphl7" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 12:53:36.523201 kubelet[2701]: I0515 12:53:36.522907 2701 kubelet.go:2306] "Pod admission denied" podUID="5eed1b9c-4a83-40f0-a541-4c0e53d6f81d" pod="tigera-operator/tigera-operator-6f6897fdc5-6dw2b" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:36.623547 kubelet[2701]: I0515 12:53:36.623308 2701 kubelet.go:2306] "Pod admission denied" podUID="50e88d42-1c5f-41f0-980b-9ddb2152199e" pod="tigera-operator/tigera-operator-6f6897fdc5-jq5xn" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:36.723795 kubelet[2701]: I0515 12:53:36.723746 2701 kubelet.go:2306] "Pod admission denied" podUID="ba425860-95b1-4b9a-86ed-7f3c33ad0af7" pod="tigera-operator/tigera-operator-6f6897fdc5-mg529" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:36.775055 kubelet[2701]: I0515 12:53:36.774919 2701 kubelet.go:2306] "Pod admission denied" podUID="829a0e7e-ff6c-4349-b4a6-5e27d5a08f39" pod="tigera-operator/tigera-operator-6f6897fdc5-wpr75" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:36.871584 kubelet[2701]: I0515 12:53:36.871532 2701 kubelet.go:2306] "Pod admission denied" podUID="212059b7-fb42-4201-af3b-b95f71e0b627" pod="tigera-operator/tigera-operator-6f6897fdc5-6hwfp" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:36.973087 kubelet[2701]: I0515 12:53:36.973047 2701 kubelet.go:2306] "Pod admission denied" podUID="97e191dd-3f4b-4dee-a858-c3ef44fec97f" pod="tigera-operator/tigera-operator-6f6897fdc5-ph6cv" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:37.023391 kubelet[2701]: I0515 12:53:37.023354 2701 kubelet.go:2306] "Pod admission denied" podUID="d7d75f81-e9d6-4453-8269-23b42c86e0aa" pod="tigera-operator/tigera-operator-6f6897fdc5-2hj5v" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:37.131522 kubelet[2701]: I0515 12:53:37.130841 2701 kubelet.go:2306] "Pod admission denied" podUID="f688c9df-4570-44b6-afba-273b61ed8d37" pod="tigera-operator/tigera-operator-6f6897fdc5-hms6l" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:37.321573 kubelet[2701]: I0515 12:53:37.321534 2701 kubelet.go:2306] "Pod admission denied" podUID="3a41dd2a-3556-4c4a-92a4-cd7fb50e6413" pod="tigera-operator/tigera-operator-6f6897fdc5-69lfz" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:37.422202 kubelet[2701]: I0515 12:53:37.422095 2701 kubelet.go:2306] "Pod admission denied" podUID="9c39c434-fcff-4deb-afac-7bbb19a2f167" pod="tigera-operator/tigera-operator-6f6897fdc5-gbqxr" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:37.491637 kubelet[2701]: I0515 12:53:37.491586 2701 kubelet.go:2306] "Pod admission denied" podUID="b727384e-99ec-457c-aaf1-f61ed4946057" pod="tigera-operator/tigera-operator-6f6897fdc5-j5dv4" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:37.572959 kubelet[2701]: I0515 12:53:37.572917 2701 kubelet.go:2306] "Pod admission denied" podUID="4d3cdcb6-8293-4bf2-bbca-f8ff597c0c7a" pod="tigera-operator/tigera-operator-6f6897fdc5-jmglw" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 12:53:37.672772 kubelet[2701]: I0515 12:53:37.672659 2701 kubelet.go:2306] "Pod admission denied" podUID="a110600c-d681-4be3-8254-b8aed0a91842" pod="tigera-operator/tigera-operator-6f6897fdc5-7rdnx" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:37.773726 kubelet[2701]: I0515 12:53:37.773685 2701 kubelet.go:2306] "Pod admission denied" podUID="c8a5b61b-44e5-4bd4-b2ae-805b66306cb9" pod="tigera-operator/tigera-operator-6f6897fdc5-rz5kh" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:37.871841 kubelet[2701]: I0515 12:53:37.871796 2701 kubelet.go:2306] "Pod admission denied" podUID="81e613ee-644f-41f1-be0d-7e73fc5860dd" pod="tigera-operator/tigera-operator-6f6897fdc5-zqxdn" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:37.974023 kubelet[2701]: I0515 12:53:37.973909 2701 kubelet.go:2306] "Pod admission denied" podUID="267a9bf9-2fa4-4180-8c9d-17a16f2c2552" pod="tigera-operator/tigera-operator-6f6897fdc5-bw6tc" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:38.173364 kubelet[2701]: I0515 12:53:38.173318 2701 kubelet.go:2306] "Pod admission denied" podUID="36e45aa4-5139-448c-b287-773fad5396e9" pod="tigera-operator/tigera-operator-6f6897fdc5-nfwmv" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:38.273349 kubelet[2701]: I0515 12:53:38.273230 2701 kubelet.go:2306] "Pod admission denied" podUID="478f114a-06d9-4e98-9acf-e599fd791452" pod="tigera-operator/tigera-operator-6f6897fdc5-sv9c4" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:38.380740 kubelet[2701]: I0515 12:53:38.379741 2701 kubelet.go:2306] "Pod admission denied" podUID="f03d2796-9678-4143-8f67-de93ad81a5f1" pod="tigera-operator/tigera-operator-6f6897fdc5-sq6w6" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:38.408208 kubelet[2701]: E0515 12:53:38.408119 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:53:38.409389 kubelet[2701]: E0515 12:53:38.409309 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:53:38.573170 kubelet[2701]: I0515 12:53:38.573131 2701 kubelet.go:2306] "Pod admission denied" podUID="053e1692-638a-4f15-a6b3-597f4fcdeabf" pod="tigera-operator/tigera-operator-6f6897fdc5-jklg9" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:38.673108 kubelet[2701]: I0515 12:53:38.673053 2701 kubelet.go:2306] "Pod admission denied" podUID="94ccaa0d-559a-4a43-8625-cc057f0bf634" pod="tigera-operator/tigera-operator-6f6897fdc5-9zwdd" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:38.720398 kubelet[2701]: I0515 12:53:38.720359 2701 kubelet.go:2306] "Pod admission denied" podUID="8e105a77-0c17-4490-8b42-d92806e67474" pod="tigera-operator/tigera-operator-6f6897fdc5-czvcb" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:38.823146 kubelet[2701]: I0515 12:53:38.823101 2701 kubelet.go:2306] "Pod admission denied" podUID="7b136f4c-9fc9-4a34-aa36-e873903572d3" pod="tigera-operator/tigera-operator-6f6897fdc5-76bwz" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 12:53:38.921245 kubelet[2701]: I0515 12:53:38.920997 2701 kubelet.go:2306] "Pod admission denied" podUID="2d638053-e28d-46ff-aa6b-2c255aa4aa8d" pod="tigera-operator/tigera-operator-6f6897fdc5-5svb9" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:39.019561 kubelet[2701]: I0515 12:53:39.019527 2701 kubelet.go:2306] "Pod admission denied" podUID="89918eec-b215-4271-aaff-5df4f8a45e5a" pod="tigera-operator/tigera-operator-6f6897fdc5-7zwgr" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:39.223539 kubelet[2701]: I0515 12:53:39.223395 2701 kubelet.go:2306] "Pod admission denied" podUID="0a55a524-afda-4909-a928-5098a4aaa145" pod="tigera-operator/tigera-operator-6f6897fdc5-ccwg7" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:39.323212 kubelet[2701]: I0515 12:53:39.323169 2701 kubelet.go:2306] "Pod admission denied" podUID="3a7ea6a5-f0d4-4c19-932a-aab518c87312" pod="tigera-operator/tigera-operator-6f6897fdc5-ct872" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:39.437649 kubelet[2701]: I0515 12:53:39.437560 2701 kubelet.go:2306] "Pod admission denied" podUID="e01b626d-a311-48ed-b448-3ebf1e2d8464" pod="tigera-operator/tigera-operator-6f6897fdc5-ptq85" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:39.523751 kubelet[2701]: I0515 12:53:39.523319 2701 kubelet.go:2306] "Pod admission denied" podUID="c76af282-2afa-4627-879e-6ed48c56751d" pod="tigera-operator/tigera-operator-6f6897fdc5-fvjnv" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:39.624542 kubelet[2701]: I0515 12:53:39.624498 2701 kubelet.go:2306] "Pod admission denied" podUID="2ea1561c-baa6-4698-b7b8-6f7c6ef22965" pod="tigera-operator/tigera-operator-6f6897fdc5-qww5s" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:39.822992 kubelet[2701]: I0515 12:53:39.822964 2701 kubelet.go:2306] "Pod admission denied" podUID="8fd53d96-69b5-4e6a-b5a0-c6cf1fab5ac8" pod="tigera-operator/tigera-operator-6f6897fdc5-84jkz" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:39.925482 kubelet[2701]: I0515 12:53:39.924979 2701 kubelet.go:2306] "Pod admission denied" podUID="ed0cf5d9-5932-46cc-9a11-65f73fcbe22a" pod="tigera-operator/tigera-operator-6f6897fdc5-n4596" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:39.969570 kubelet[2701]: I0515 12:53:39.969527 2701 kubelet.go:2306] "Pod admission denied" podUID="437cb457-6a3c-4af4-8279-93464e09ff71" pod="tigera-operator/tigera-operator-6f6897fdc5-lvgqc" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:40.075134 kubelet[2701]: I0515 12:53:40.074881 2701 kubelet.go:2306] "Pod admission denied" podUID="1466c4cf-4ab4-45fa-8fea-a02f584de6a6" pod="tigera-operator/tigera-operator-6f6897fdc5-jrr49" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:40.274436 kubelet[2701]: I0515 12:53:40.274391 2701 kubelet.go:2306] "Pod admission denied" podUID="4b841cf2-ae68-425f-b67d-db40ed7ad06e" pod="tigera-operator/tigera-operator-6f6897fdc5-55pt2" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 12:53:40.374736 kubelet[2701]: I0515 12:53:40.374228 2701 kubelet.go:2306] "Pod admission denied" podUID="586e34f1-f659-46eb-ba78-4b9a60f4201a" pod="tigera-operator/tigera-operator-6f6897fdc5-2njsh" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:40.408498 kubelet[2701]: E0515 12:53:40.407449 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:53:40.422794 kubelet[2701]: I0515 12:53:40.422740 2701 kubelet.go:2306] "Pod admission denied" podUID="35d6aae7-e290-4469-9f4d-7eb8885b9a91" pod="tigera-operator/tigera-operator-6f6897fdc5-bmqdt" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:40.522792 kubelet[2701]: I0515 12:53:40.522747 2701 kubelet.go:2306] "Pod admission denied" podUID="2ce63344-0b45-4212-a6cc-e8ad6d64fdf6" pod="tigera-operator/tigera-operator-6f6897fdc5-7zq44" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:40.723559 kubelet[2701]: I0515 12:53:40.723230 2701 kubelet.go:2306] "Pod admission denied" podUID="58525aa0-fef9-47c1-a747-4f2fd21ddaee" pod="tigera-operator/tigera-operator-6f6897fdc5-h7tkb" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:40.829284 kubelet[2701]: I0515 12:53:40.828921 2701 kubelet.go:2306] "Pod admission denied" podUID="f22667be-7a01-4c6a-bda3-1a697364d875" pod="tigera-operator/tigera-operator-6f6897fdc5-cn7sq" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:40.926219 kubelet[2701]: I0515 12:53:40.926157 2701 kubelet.go:2306] "Pod admission denied" podUID="67af94f7-0e02-4ab5-b41b-a1b890f89863" pod="tigera-operator/tigera-operator-6f6897fdc5-n5xdp" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:41.125573 kubelet[2701]: I0515 12:53:41.125524 2701 kubelet.go:2306] "Pod admission denied" podUID="bd059a1c-f090-43fe-a74d-c0cee16a1829" pod="tigera-operator/tigera-operator-6f6897fdc5-dp75b" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:41.225976 kubelet[2701]: I0515 12:53:41.225921 2701 kubelet.go:2306] "Pod admission denied" podUID="93d25a6c-9c56-4d75-bf94-03010939d7d6" pod="tigera-operator/tigera-operator-6f6897fdc5-p8rz6" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:41.273584 kubelet[2701]: I0515 12:53:41.273539 2701 kubelet.go:2306] "Pod admission denied" podUID="dd5e1b9c-9c41-48f2-9d4d-bdf430303924" pod="tigera-operator/tigera-operator-6f6897fdc5-9blf9" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:41.373623 kubelet[2701]: I0515 12:53:41.373563 2701 kubelet.go:2306] "Pod admission denied" podUID="d2891d12-8bbc-4ff7-b03b-0a3a75a7d923" pod="tigera-operator/tigera-operator-6f6897fdc5-ckb7l" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 12:53:41.407950 kubelet[2701]: E0515 12:53:41.407714 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:53:41.410235 containerd[1536]: time="2025-05-15T12:53:41.410155955Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 15 12:53:41.572264 kubelet[2701]: I0515 12:53:41.572206 2701 kubelet.go:2306] "Pod admission denied" podUID="0e9dbd01-6600-4eaf-8632-7090d075e398" pod="tigera-operator/tigera-operator-6f6897fdc5-qgqs5" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:41.675915 kubelet[2701]: I0515 12:53:41.675761 2701 kubelet.go:2306] "Pod admission denied" podUID="121ad944-bddb-4a80-a2c6-82837f4aaf7a" pod="tigera-operator/tigera-operator-6f6897fdc5-lxt2m" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:41.724481 kubelet[2701]: I0515 12:53:41.723665 2701 kubelet.go:2306] "Pod admission denied" podUID="57c5958d-fba2-491a-9f76-51fcdf9be12e" pod="tigera-operator/tigera-operator-6f6897fdc5-d665g" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:41.824163 kubelet[2701]: I0515 12:53:41.824110 2701 kubelet.go:2306] "Pod admission denied" podUID="23d3e504-09ab-459e-a4a6-81d75d6f167f" pod="tigera-operator/tigera-operator-6f6897fdc5-xrdjk" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:41.921890 kubelet[2701]: I0515 12:53:41.921851 2701 kubelet.go:2306] "Pod admission denied" podUID="8c7da40b-3f20-410c-844b-56f234c18472" pod="tigera-operator/tigera-operator-6f6897fdc5-sb9zd" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:41.964418 kubelet[2701]: I0515 12:53:41.964127 2701 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 12:53:41.964418 kubelet[2701]: I0515 12:53:41.964360 2701 container_gc.go:88] "Attempting to delete unused containers" May 15 12:53:41.966395 kubelet[2701]: I0515 12:53:41.966381 2701 image_gc_manager.go:431] "Attempting to delete unused images" May 15 12:53:41.978858 kubelet[2701]: I0515 12:53:41.978829 2701 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 12:53:41.979001 kubelet[2701]: I0515 12:53:41.978910 2701 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-6f6b679f8f-svgwx","calico-system/calico-kube-controllers-6cff7bc5b6-sff94","kube-system/coredns-6f6b679f8f-hfrjd","calico-system/calico-node-qxkgl","calico-system/csi-node-driver-d7hrh","calico-system/calico-typha-59b79bbb46-9qqgw","kube-system/kube-controller-manager-172-232-9-197","kube-system/kube-proxy-rhz8r","kube-system/kube-apiserver-172-232-9-197","kube-system/kube-scheduler-172-232-9-197"] May 15 12:53:41.979001 kubelet[2701]: E0515 12:53:41.978938 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-svgwx" May 15 12:53:41.979001 kubelet[2701]: E0515 12:53:41.978948 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-6cff7bc5b6-sff94" May 15 12:53:41.979001 kubelet[2701]: E0515 12:53:41.978955 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-hfrjd" May 15 12:53:41.979001 kubelet[2701]: E0515 12:53:41.978962 2701 
eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-qxkgl" May 15 12:53:41.979001 kubelet[2701]: E0515 12:53:41.978969 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-d7hrh" May 15 12:53:41.979001 kubelet[2701]: E0515 12:53:41.978979 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-59b79bbb46-9qqgw" May 15 12:53:41.979001 kubelet[2701]: E0515 12:53:41.978988 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-232-9-197" May 15 12:53:41.979001 kubelet[2701]: E0515 12:53:41.978996 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-rhz8r" May 15 12:53:41.979001 kubelet[2701]: E0515 12:53:41.979007 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-232-9-197" May 15 12:53:41.979274 kubelet[2701]: E0515 12:53:41.979017 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-232-9-197" May 15 12:53:41.979274 kubelet[2701]: I0515 12:53:41.979026 2701 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 12:53:42.023163 kubelet[2701]: I0515 12:53:42.023119 2701 kubelet.go:2306] "Pod admission denied" podUID="c36f58db-8111-487a-812c-a7942fefd361" pod="tigera-operator/tigera-operator-6f6897fdc5-vnjgv" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:42.122477 kubelet[2701]: I0515 12:53:42.121587 2701 kubelet.go:2306] "Pod admission denied" podUID="2a572528-df75-44b8-a356-a78c1db85905" pod="tigera-operator/tigera-operator-6f6897fdc5-g7s49" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:42.173209 kubelet[2701]: I0515 12:53:42.173154 2701 kubelet.go:2306] "Pod admission denied" podUID="370c93af-5403-4bac-b939-088f31069f55" pod="tigera-operator/tigera-operator-6f6897fdc5-jsttx" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:42.273650 kubelet[2701]: I0515 12:53:42.273530 2701 kubelet.go:2306] "Pod admission denied" podUID="e115386c-8003-43c9-8a4e-d1bc00b5a061" pod="tigera-operator/tigera-operator-6f6897fdc5-dh9gq" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:42.408303 kubelet[2701]: E0515 12:53:42.407735 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:53:42.408663 containerd[1536]: time="2025-05-15T12:53:42.408632115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hfrjd,Uid:be5ba310-2fcd-4763-b9cf-8dba85ce0f76,Namespace:kube-system,Attempt:0,}" May 15 12:53:42.481583 kubelet[2701]: I0515 12:53:42.481548 2701 kubelet.go:2306] "Pod admission denied" podUID="415761f8-511c-4f2f-adf1-5208e11c1b05" pod="tigera-operator/tigera-operator-6f6897fdc5-lxpnt" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 12:53:42.489091 containerd[1536]: time="2025-05-15T12:53:42.489044953Z" level=error msg="Failed to destroy network for sandbox \"b3ec564fa88f3587b019926b3d6de8c7c44460ef1163bac0ddea074a083fe82d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:42.492742 containerd[1536]: time="2025-05-15T12:53:42.492404025Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hfrjd,Uid:be5ba310-2fcd-4763-b9cf-8dba85ce0f76,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3ec564fa88f3587b019926b3d6de8c7c44460ef1163bac0ddea074a083fe82d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:42.491402 systemd[1]: run-netns-cni\x2d078a0fe7\x2dfe96\x2dff09\x2d6b40\x2df58407ebde4f.mount: Deactivated successfully. May 15 12:53:42.494698 kubelet[2701]: E0515 12:53:42.493321 2701 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3ec564fa88f3587b019926b3d6de8c7c44460ef1163bac0ddea074a083fe82d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:42.494698 kubelet[2701]: E0515 12:53:42.493384 2701 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3ec564fa88f3587b019926b3d6de8c7c44460ef1163bac0ddea074a083fe82d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-hfrjd" May 15 12:53:42.494698 kubelet[2701]: E0515 12:53:42.493406 2701 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3ec564fa88f3587b019926b3d6de8c7c44460ef1163bac0ddea074a083fe82d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-hfrjd" May 15 12:53:42.494698 kubelet[2701]: E0515 12:53:42.493445 2701 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-hfrjd_kube-system(be5ba310-2fcd-4763-b9cf-8dba85ce0f76)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-hfrjd_kube-system(be5ba310-2fcd-4763-b9cf-8dba85ce0f76)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b3ec564fa88f3587b019926b3d6de8c7c44460ef1163bac0ddea074a083fe82d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-hfrjd" podUID="be5ba310-2fcd-4763-b9cf-8dba85ce0f76" May 15 12:53:42.573192 kubelet[2701]: I0515 12:53:42.573139 2701 kubelet.go:2306] "Pod admission denied" podUID="ca78141f-03d0-4848-ad13-ff8497be3576" pod="tigera-operator/tigera-operator-6f6897fdc5-ghxhn" reason="Evicted" 
message="The node had condition: [DiskPressure]. " May 15 12:53:42.623148 kubelet[2701]: I0515 12:53:42.623091 2701 kubelet.go:2306] "Pod admission denied" podUID="c3ab14d5-5bbf-469d-9b24-25463cd9d068" pod="tigera-operator/tigera-operator-6f6897fdc5-tfdjt" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:42.723096 kubelet[2701]: I0515 12:53:42.723044 2701 kubelet.go:2306] "Pod admission denied" podUID="abb4f946-96bf-4177-ac40-f88f05870bb4" pod="tigera-operator/tigera-operator-6f6897fdc5-wjxrv" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:42.923002 kubelet[2701]: I0515 12:53:42.922875 2701 kubelet.go:2306] "Pod admission denied" podUID="626fdb46-a0d4-436d-9fc7-10057f832c06" pod="tigera-operator/tigera-operator-6f6897fdc5-7wkrz" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:43.021838 kubelet[2701]: I0515 12:53:43.021798 2701 kubelet.go:2306] "Pod admission denied" podUID="ff364415-363d-493b-81c0-6dd38540ac61" pod="tigera-operator/tigera-operator-6f6897fdc5-d99mh" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:43.121423 kubelet[2701]: I0515 12:53:43.121370 2701 kubelet.go:2306] "Pod admission denied" podUID="a66352c4-8189-469b-b2aa-8971176f67a8" pod="tigera-operator/tigera-operator-6f6897fdc5-zrjgq" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:43.227537 kubelet[2701]: I0515 12:53:43.227327 2701 kubelet.go:2306] "Pod admission denied" podUID="ce481dbe-2380-4942-914b-6188676a6723" pod="tigera-operator/tigera-operator-6f6897fdc5-dxq59" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:43.323972 kubelet[2701]: I0515 12:53:43.323917 2701 kubelet.go:2306] "Pod admission denied" podUID="ed598f66-29fc-40e0-8635-ca341c6de586" pod="tigera-operator/tigera-operator-6f6897fdc5-flm4p" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:43.407884 containerd[1536]: time="2025-05-15T12:53:43.407835429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d7hrh,Uid:30c809cf-5d96-45fe-9af3-2a80162d2f28,Namespace:calico-system,Attempt:0,}" May 15 12:53:43.433108 kubelet[2701]: I0515 12:53:43.433066 2701 kubelet.go:2306] "Pod admission denied" podUID="8100ee0c-b726-4991-b117-a59c50cb0bd1" pod="tigera-operator/tigera-operator-6f6897fdc5-8m5h5" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:43.478486 kubelet[2701]: I0515 12:53:43.478305 2701 kubelet.go:2306] "Pod admission denied" podUID="8062eaab-57ef-4b98-b062-df6d004d0c27" pod="tigera-operator/tigera-operator-6f6897fdc5-9vcgm" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:43.481729 containerd[1536]: time="2025-05-15T12:53:43.481619460Z" level=error msg="Failed to destroy network for sandbox \"6ec2c8107c0fd01cbe2b343ee4590337bf5895beea8a78d5d8f0fa847a5e0235\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:43.485342 systemd[1]: run-netns-cni\x2db21520cb\x2dd85f\x2daec4\x2daa2e\x2d0bed81d89189.mount: Deactivated successfully. 
May 15 12:53:43.489324 containerd[1536]: time="2025-05-15T12:53:43.489245615Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d7hrh,Uid:30c809cf-5d96-45fe-9af3-2a80162d2f28,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ec2c8107c0fd01cbe2b343ee4590337bf5895beea8a78d5d8f0fa847a5e0235\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:43.490308 kubelet[2701]: E0515 12:53:43.490268 2701 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ec2c8107c0fd01cbe2b343ee4590337bf5895beea8a78d5d8f0fa847a5e0235\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:43.490408 kubelet[2701]: E0515 12:53:43.490331 2701 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ec2c8107c0fd01cbe2b343ee4590337bf5895beea8a78d5d8f0fa847a5e0235\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d7hrh" May 15 12:53:43.490408 kubelet[2701]: E0515 12:53:43.490354 2701 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ec2c8107c0fd01cbe2b343ee4590337bf5895beea8a78d5d8f0fa847a5e0235\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d7hrh" May 15 12:53:43.490408 kubelet[2701]: E0515 12:53:43.490388 2701 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-d7hrh_calico-system(30c809cf-5d96-45fe-9af3-2a80162d2f28)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-d7hrh_calico-system(30c809cf-5d96-45fe-9af3-2a80162d2f28)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6ec2c8107c0fd01cbe2b343ee4590337bf5895beea8a78d5d8f0fa847a5e0235\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-d7hrh" podUID="30c809cf-5d96-45fe-9af3-2a80162d2f28" May 15 12:53:43.580290 kubelet[2701]: I0515 12:53:43.580240 2701 kubelet.go:2306] "Pod admission denied" podUID="17c3ed82-526a-4998-a76f-2dd9a50d17b0" pod="tigera-operator/tigera-operator-6f6897fdc5-4rtql" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:43.672833 kubelet[2701]: I0515 12:53:43.672779 2701 kubelet.go:2306] "Pod admission denied" podUID="05498df8-8016-45e1-beff-95fb10b2f896" pod="tigera-operator/tigera-operator-6f6897fdc5-qlxz2" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 12:53:43.777868 kubelet[2701]: I0515 12:53:43.777754 2701 kubelet.go:2306] "Pod admission denied" podUID="3483c4ed-768e-49b2-a7d7-fa7898a413d1" pod="tigera-operator/tigera-operator-6f6897fdc5-qmjls" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:43.872182 kubelet[2701]: I0515 12:53:43.872121 2701 kubelet.go:2306] "Pod admission denied" podUID="5406a6cb-1cf1-4a2c-a2d3-14d51cbbcaf6" pod="tigera-operator/tigera-operator-6f6897fdc5-bxfvw" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:43.923518 kubelet[2701]: I0515 12:53:43.923452 2701 kubelet.go:2306] "Pod admission denied" podUID="5ab8c618-9dca-4438-8779-11da28cdbcc3" pod="tigera-operator/tigera-operator-6f6897fdc5-wspmm" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:44.024785 kubelet[2701]: I0515 12:53:44.024733 2701 kubelet.go:2306] "Pod admission denied" podUID="14e5e50c-53d2-4c51-a1f6-7f34c23e29ce" pod="tigera-operator/tigera-operator-6f6897fdc5-hwwzz" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:44.129887 kubelet[2701]: I0515 12:53:44.129843 2701 kubelet.go:2306] "Pod admission denied" podUID="41ea5be7-2f1f-4305-b2c4-6a8fe725ace2" pod="tigera-operator/tigera-operator-6f6897fdc5-9cf49" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:44.176574 kubelet[2701]: I0515 12:53:44.176335 2701 kubelet.go:2306] "Pod admission denied" podUID="912777f1-e47a-4b99-9bad-4f8002f93232" pod="tigera-operator/tigera-operator-6f6897fdc5-297t5" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:44.277162 kubelet[2701]: I0515 12:53:44.277107 2701 kubelet.go:2306] "Pod admission denied" podUID="f87251dd-796c-4ff3-a3ab-6a4b3d849a8e" pod="tigera-operator/tigera-operator-6f6897fdc5-lk5zj" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:44.479787 kubelet[2701]: I0515 12:53:44.479262 2701 kubelet.go:2306] "Pod admission denied" podUID="b4db2baa-070f-4c7f-a0e2-19384dd3e24e" pod="tigera-operator/tigera-operator-6f6897fdc5-zsvls" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:44.638073 kubelet[2701]: I0515 12:53:44.638005 2701 kubelet.go:2306] "Pod admission denied" podUID="a899944e-625a-44f7-95f2-5b3c4ce7d291" pod="tigera-operator/tigera-operator-6f6897fdc5-ks9z8" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:44.671304 kubelet[2701]: I0515 12:53:44.671260 2701 kubelet.go:2306] "Pod admission denied" podUID="3a0207e4-19e3-40a5-8a13-d3b7ebff1f84" pod="tigera-operator/tigera-operator-6f6897fdc5-9wktw" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:44.776106 kubelet[2701]: I0515 12:53:44.775854 2701 kubelet.go:2306] "Pod admission denied" podUID="bf337df1-e261-4854-a97a-4eabb91b7e76" pod="tigera-operator/tigera-operator-6f6897fdc5-n7b6z" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:44.875484 kubelet[2701]: I0515 12:53:44.874771 2701 kubelet.go:2306] "Pod admission denied" podUID="beb83010-6f17-408c-a393-58f8eded6a95" pod="tigera-operator/tigera-operator-6f6897fdc5-5vlwr" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 12:53:44.972396 kubelet[2701]: I0515 12:53:44.972335 2701 kubelet.go:2306] "Pod admission denied" podUID="b83877a0-d6ce-4263-99a8-839d8f626cd8" pod="tigera-operator/tigera-operator-6f6897fdc5-5vcxn" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:45.073964 kubelet[2701]: I0515 12:53:45.073919 2701 kubelet.go:2306] "Pod admission denied" podUID="44f8cafc-168e-4839-b55f-bfdf0bd9aa36" pod="tigera-operator/tigera-operator-6f6897fdc5-7k5b8" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:45.275639 kubelet[2701]: I0515 12:53:45.275380 2701 kubelet.go:2306] "Pod admission denied" podUID="cc2bb789-7317-4b1a-b356-4a4635dc7b3b" pod="tigera-operator/tigera-operator-6f6897fdc5-597vd" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:45.410788 containerd[1536]: time="2025-05-15T12:53:45.410667349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cff7bc5b6-sff94,Uid:9345d3c6-0d7e-4691-8b9a-ecd0176d0441,Namespace:calico-system,Attempt:0,}" May 15 12:53:45.418589 kubelet[2701]: I0515 12:53:45.415389 2701 kubelet.go:2306] "Pod admission denied" podUID="4489a6d6-050c-404e-a7b2-61a5f2651ae4" pod="tigera-operator/tigera-operator-6f6897fdc5-fnwkg" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:45.476847 kubelet[2701]: I0515 12:53:45.476789 2701 kubelet.go:2306] "Pod admission denied" podUID="029c9db9-aa72-4bc3-b830-a2115e0eb37a" pod="tigera-operator/tigera-operator-6f6897fdc5-55drn" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:45.517749 containerd[1536]: time="2025-05-15T12:53:45.516434819Z" level=error msg="Failed to destroy network for sandbox \"8c3a163c02cf4f058cc6f7e22351e2e534e6b5475308efc7f6e706328e27b0cc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:45.519068 systemd[1]: run-netns-cni\x2da92cda47\x2dc066\x2d62cb\x2dd3f4\x2d51bcd7b7e1c2.mount: Deactivated successfully. 
May 15 12:53:45.520901 containerd[1536]: time="2025-05-15T12:53:45.520634172Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cff7bc5b6-sff94,Uid:9345d3c6-0d7e-4691-8b9a-ecd0176d0441,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c3a163c02cf4f058cc6f7e22351e2e534e6b5475308efc7f6e706328e27b0cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:45.523419 kubelet[2701]: E0515 12:53:45.523348 2701 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c3a163c02cf4f058cc6f7e22351e2e534e6b5475308efc7f6e706328e27b0cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:45.523506 kubelet[2701]: E0515 12:53:45.523482 2701 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c3a163c02cf4f058cc6f7e22351e2e534e6b5475308efc7f6e706328e27b0cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6cff7bc5b6-sff94" May 15 12:53:45.523743 kubelet[2701]: E0515 12:53:45.523509 2701 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c3a163c02cf4f058cc6f7e22351e2e534e6b5475308efc7f6e706328e27b0cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6cff7bc5b6-sff94" May 15 12:53:45.523819 kubelet[2701]: E0515 12:53:45.523790 2701 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6cff7bc5b6-sff94_calico-system(9345d3c6-0d7e-4691-8b9a-ecd0176d0441)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6cff7bc5b6-sff94_calico-system(9345d3c6-0d7e-4691-8b9a-ecd0176d0441)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8c3a163c02cf4f058cc6f7e22351e2e534e6b5475308efc7f6e706328e27b0cc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6cff7bc5b6-sff94" podUID="9345d3c6-0d7e-4691-8b9a-ecd0176d0441" May 15 12:53:45.576013 kubelet[2701]: I0515 12:53:45.575954 2701 kubelet.go:2306] "Pod admission denied" podUID="b8f2d8f2-6f93-452f-a420-b9710eb5cad5" pod="tigera-operator/tigera-operator-6f6897fdc5-hq5ns" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:45.672286 kubelet[2701]: I0515 12:53:45.672176 2701 kubelet.go:2306] "Pod admission denied" podUID="d1a123ac-96af-420a-bba6-7e44315dc895" pod="tigera-operator/tigera-operator-6f6897fdc5-8q7zg" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 12:53:45.782615 kubelet[2701]: I0515 12:53:45.781851 2701 kubelet.go:2306] "Pod admission denied" podUID="465c6816-0bfe-4472-b52f-0ad61754fb53" pod="tigera-operator/tigera-operator-6f6897fdc5-x9chj" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:45.873879 kubelet[2701]: I0515 12:53:45.873815 2701 kubelet.go:2306] "Pod admission denied" podUID="99bf23fe-3e49-46cc-927b-bf8111fc3934" pod="tigera-operator/tigera-operator-6f6897fdc5-zdkc2" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:45.975447 kubelet[2701]: I0515 12:53:45.975318 2701 kubelet.go:2306] "Pod admission denied" podUID="a3487867-2fc7-479e-94b6-414264cfc754" pod="tigera-operator/tigera-operator-6f6897fdc5-p6wnl" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:46.075052 kubelet[2701]: I0515 12:53:46.075005 2701 kubelet.go:2306] "Pod admission denied" podUID="2ad1e794-a208-49b3-a793-eb11ebeffeb0" pod="tigera-operator/tigera-operator-6f6897fdc5-5j7ld" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:46.176484 kubelet[2701]: I0515 12:53:46.175837 2701 kubelet.go:2306] "Pod admission denied" podUID="8a6646f9-fd20-4e25-b74f-df304105c2a9" pod="tigera-operator/tigera-operator-6f6897fdc5-7dv22" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:46.275028 kubelet[2701]: I0515 12:53:46.274579 2701 kubelet.go:2306] "Pod admission denied" podUID="c73358fe-83d3-4e4a-9ecc-3aeadc52c433" pod="tigera-operator/tigera-operator-6f6897fdc5-zc8rp" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:46.373468 kubelet[2701]: I0515 12:53:46.373398 2701 kubelet.go:2306] "Pod admission denied" podUID="58ea0fca-7d92-41e7-8033-f8a95f16a33b" pod="tigera-operator/tigera-operator-6f6897fdc5-5qb6d" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:46.407488 kubelet[2701]: E0515 12:53:46.407367 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:53:46.408389 containerd[1536]: time="2025-05-15T12:53:46.408131979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-svgwx,Uid:a44ca807-6ed0-447f-9c4f-1a10e61b025b,Namespace:kube-system,Attempt:0,}" May 15 12:53:46.476781 kubelet[2701]: I0515 12:53:46.476735 2701 kubelet.go:2306] "Pod admission denied" podUID="3a43cf1c-90d5-4b2c-9ca6-80ef049a5e8c" pod="tigera-operator/tigera-operator-6f6897fdc5-nhb2v" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:46.487786 containerd[1536]: time="2025-05-15T12:53:46.487521661Z" level=error msg="Failed to destroy network for sandbox \"5b9e3b1f87fdcac690cb6f6c73d92ecd41719dae71ff271b0fa0ef0be29553f7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:46.490340 systemd[1]: run-netns-cni\x2d9813d12c\x2d8cdb\x2d66dd\x2deeac\x2d642cc000e466.mount: Deactivated successfully. 
May 15 12:53:46.492600 containerd[1536]: time="2025-05-15T12:53:46.492503127Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-svgwx,Uid:a44ca807-6ed0-447f-9c4f-1a10e61b025b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b9e3b1f87fdcac690cb6f6c73d92ecd41719dae71ff271b0fa0ef0be29553f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:46.492724 kubelet[2701]: E0515 12:53:46.492681 2701 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b9e3b1f87fdcac690cb6f6c73d92ecd41719dae71ff271b0fa0ef0be29553f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:46.492779 kubelet[2701]: E0515 12:53:46.492749 2701 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b9e3b1f87fdcac690cb6f6c73d92ecd41719dae71ff271b0fa0ef0be29553f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-svgwx" May 15 12:53:46.492779 kubelet[2701]: E0515 12:53:46.492770 2701 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b9e3b1f87fdcac690cb6f6c73d92ecd41719dae71ff271b0fa0ef0be29553f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-svgwx" May 15 12:53:46.492821 kubelet[2701]: E0515 12:53:46.492804 2701 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-svgwx_kube-system(a44ca807-6ed0-447f-9c4f-1a10e61b025b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-svgwx_kube-system(a44ca807-6ed0-447f-9c4f-1a10e61b025b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5b9e3b1f87fdcac690cb6f6c73d92ecd41719dae71ff271b0fa0ef0be29553f7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-svgwx" podUID="a44ca807-6ed0-447f-9c4f-1a10e61b025b" May 15 12:53:46.679203 kubelet[2701]: I0515 12:53:46.679158 2701 kubelet.go:2306] "Pod admission denied" podUID="80c5d1d1-385f-4818-8e2f-84f0aef21a81" pod="tigera-operator/tigera-operator-6f6897fdc5-2zplq" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:46.779519 kubelet[2701]: I0515 12:53:46.779481 2701 kubelet.go:2306] "Pod admission denied" podUID="fd72bd2c-ffdc-4aaf-8120-219aba7d0308" pod="tigera-operator/tigera-operator-6f6897fdc5-r9tb4" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 12:53:46.874812 kubelet[2701]: I0515 12:53:46.874746 2701 kubelet.go:2306] "Pod admission denied" podUID="f7537ea1-9718-4c0a-893e-58d575ad8dbd" pod="tigera-operator/tigera-operator-6f6897fdc5-wdtxn" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:46.974043 kubelet[2701]: I0515 12:53:46.973923 2701 kubelet.go:2306] "Pod admission denied" podUID="718270c3-8d86-4003-b30e-c8dfb5bdf45b" pod="tigera-operator/tigera-operator-6f6897fdc5-9xpbj" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:47.077245 kubelet[2701]: I0515 12:53:47.077194 2701 kubelet.go:2306] "Pod admission denied" podUID="112b1d9b-023b-461a-8ce7-799f4cba9065" pod="tigera-operator/tigera-operator-6f6897fdc5-94mqc" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:47.182874 kubelet[2701]: I0515 12:53:47.182390 2701 kubelet.go:2306] "Pod admission denied" podUID="5540c836-06bc-48b2-83b7-8675e625cddb" pod="tigera-operator/tigera-operator-6f6897fdc5-g85kj" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:47.276843 kubelet[2701]: I0515 12:53:47.276715 2701 kubelet.go:2306] "Pod admission denied" podUID="4b34acd5-da3f-4028-a62a-35ee5367981f" pod="tigera-operator/tigera-operator-6f6897fdc5-z2mtp" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:47.374254 kubelet[2701]: I0515 12:53:47.374198 2701 kubelet.go:2306] "Pod admission denied" podUID="d480b89e-c5de-4d05-b786-90ba0dcf6cc3" pod="tigera-operator/tigera-operator-6f6897fdc5-q6snh" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:47.476145 kubelet[2701]: I0515 12:53:47.476092 2701 kubelet.go:2306] "Pod admission denied" podUID="dc6e0aad-ad56-4bcb-a79e-5aa149a46d07" pod="tigera-operator/tigera-operator-6f6897fdc5-qrddf" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:47.575216 kubelet[2701]: I0515 12:53:47.575170 2701 kubelet.go:2306] "Pod admission denied" podUID="cf4e36ec-c8aa-4d99-804c-46fbb220779d" pod="tigera-operator/tigera-operator-6f6897fdc5-85ggs" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:47.673810 kubelet[2701]: I0515 12:53:47.673555 2701 kubelet.go:2306] "Pod admission denied" podUID="86e378d4-13e1-42f2-bf89-bac9a9f9ce41" pod="tigera-operator/tigera-operator-6f6897fdc5-ctgk9" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:47.777236 kubelet[2701]: I0515 12:53:47.777184 2701 kubelet.go:2306] "Pod admission denied" podUID="0047097d-6b33-4c32-b1ae-76603ac538bf" pod="tigera-operator/tigera-operator-6f6897fdc5-wxs49" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:47.874418 kubelet[2701]: I0515 12:53:47.874059 2701 kubelet.go:2306] "Pod admission denied" podUID="64109887-f778-4a38-85d8-f5fa625f9d07" pod="tigera-operator/tigera-operator-6f6897fdc5-5j6nh" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:48.074299 kubelet[2701]: I0515 12:53:48.074245 2701 kubelet.go:2306] "Pod admission denied" podUID="2c9fd1ac-5104-40f4-bdb0-6dfd6518b9cc" pod="tigera-operator/tigera-operator-6f6897fdc5-l74tj" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 12:53:48.174722 kubelet[2701]: I0515 12:53:48.174474 2701 kubelet.go:2306] "Pod admission denied" podUID="da24e58c-589a-4deb-ae17-ded2dcf50a5f" pod="tigera-operator/tigera-operator-6f6897fdc5-2qt8l" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:48.230543 kubelet[2701]: I0515 12:53:48.230447 2701 kubelet.go:2306] "Pod admission denied" podUID="16812409-5af5-4bfa-b33b-e462234f5a3b" pod="tigera-operator/tigera-operator-6f6897fdc5-t8spx" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:48.325590 kubelet[2701]: I0515 12:53:48.325530 2701 kubelet.go:2306] "Pod admission denied" podUID="d7d85c92-d64f-4049-9a1b-ed105e161bf7" pod="tigera-operator/tigera-operator-6f6897fdc5-wbnvb" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:48.525565 kubelet[2701]: I0515 12:53:48.525274 2701 kubelet.go:2306] "Pod admission denied" podUID="c5118121-0f9d-4b1f-be5e-78caccf1497d" pod="tigera-operator/tigera-operator-6f6897fdc5-pnq5r" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:48.623349 kubelet[2701]: I0515 12:53:48.623302 2701 kubelet.go:2306] "Pod admission denied" podUID="8fe85005-bea4-435a-986d-8f481bad10fe" pod="tigera-operator/tigera-operator-6f6897fdc5-bccsq" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:48.724405 kubelet[2701]: I0515 12:53:48.724356 2701 kubelet.go:2306] "Pod admission denied" podUID="18428fa7-f061-4a11-9f11-6293d4efebb6" pod="tigera-operator/tigera-operator-6f6897fdc5-6s6ff" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:48.824432 kubelet[2701]: I0515 12:53:48.824386 2701 kubelet.go:2306] "Pod admission denied" podUID="1c5af48a-56db-4afa-bf0f-ab8bef57e4c4" pod="tigera-operator/tigera-operator-6f6897fdc5-fxf2s" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:48.923929 kubelet[2701]: I0515 12:53:48.923873 2701 kubelet.go:2306] "Pod admission denied" podUID="21c582ab-682b-4065-b0d5-d17ced2842ad" pod="tigera-operator/tigera-operator-6f6897fdc5-mk92b" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:49.027237 kubelet[2701]: I0515 12:53:49.026185 2701 kubelet.go:2306] "Pod admission denied" podUID="6b499017-a600-46fc-b036-f94f541a564e" pod="tigera-operator/tigera-operator-6f6897fdc5-5crdq" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:49.123237 kubelet[2701]: I0515 12:53:49.123102 2701 kubelet.go:2306] "Pod admission denied" podUID="74718b8c-b2c9-40b2-b022-8e57cf9803e6" pod="tigera-operator/tigera-operator-6f6897fdc5-vngrt" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:49.222829 kubelet[2701]: I0515 12:53:49.222785 2701 kubelet.go:2306] "Pod admission denied" podUID="60d3d7bc-ecd7-450e-984a-5a5b4a2ebf62" pod="tigera-operator/tigera-operator-6f6897fdc5-kzg7v" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:49.323979 kubelet[2701]: I0515 12:53:49.323934 2701 kubelet.go:2306] "Pod admission denied" podUID="297bdeac-7502-4897-b2fe-b54c1c67ae53" pod="tigera-operator/tigera-operator-6f6897fdc5-dnvn9" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 12:53:49.524590 kubelet[2701]: I0515 12:53:49.523414 2701 kubelet.go:2306] "Pod admission denied" podUID="0128e3d0-9a89-49b5-af2e-7bfa8fb3488b" pod="tigera-operator/tigera-operator-6f6897fdc5-nnq7x" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:49.624858 kubelet[2701]: I0515 12:53:49.624814 2701 kubelet.go:2306] "Pod admission denied" podUID="411f1dce-dccd-4932-a72d-68cab6af85e2" pod="tigera-operator/tigera-operator-6f6897fdc5-6cjrr" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:49.722759 kubelet[2701]: I0515 12:53:49.722710 2701 kubelet.go:2306] "Pod admission denied" podUID="d37e0af8-1650-4059-bd78-734eb71adf9c" pod="tigera-operator/tigera-operator-6f6897fdc5-zm5s5" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:49.926782 kubelet[2701]: I0515 12:53:49.926732 2701 kubelet.go:2306] "Pod admission denied" podUID="8f92c802-89cc-4fd0-b406-408370cb1541" pod="tigera-operator/tigera-operator-6f6897fdc5-bc4zw" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:50.028022 kubelet[2701]: I0515 12:53:50.027291 2701 kubelet.go:2306] "Pod admission denied" podUID="0193629a-fc1f-4d4f-9a61-049779953a5b" pod="tigera-operator/tigera-operator-6f6897fdc5-8n2j5" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:50.125003 kubelet[2701]: I0515 12:53:50.124941 2701 kubelet.go:2306] "Pod admission denied" podUID="e60e1aa9-85ea-4d98-bfd7-77fd2f70ed4d" pod="tigera-operator/tigera-operator-6f6897fdc5-fspds" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:50.326138 kubelet[2701]: I0515 12:53:50.326091 2701 kubelet.go:2306] "Pod admission denied" podUID="a6ad74ef-f2cc-44d3-8bdb-1d12922b1b53" pod="tigera-operator/tigera-operator-6f6897fdc5-68kvp" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:50.428929 kubelet[2701]: I0515 12:53:50.428881 2701 kubelet.go:2306] "Pod admission denied" podUID="9df7e2ed-31f9-435c-80aa-06685ad50985" pod="tigera-operator/tigera-operator-6f6897fdc5-jg7r7" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:50.525767 kubelet[2701]: I0515 12:53:50.525715 2701 kubelet.go:2306] "Pod admission denied" podUID="d6309fed-2957-4f76-88b5-c9a1219485f6" pod="tigera-operator/tigera-operator-6f6897fdc5-whjbz" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:50.623534 kubelet[2701]: I0515 12:53:50.623406 2701 kubelet.go:2306] "Pod admission denied" podUID="d3c6ab3f-c1f9-49ec-b014-a4d8b335b1ef" pod="tigera-operator/tigera-operator-6f6897fdc5-b2z7c" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:50.673197 kubelet[2701]: I0515 12:53:50.673151 2701 kubelet.go:2306] "Pod admission denied" podUID="91ad9b31-91ec-4bbd-8752-7e3635ecf312" pod="tigera-operator/tigera-operator-6f6897fdc5-tzg6q" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:50.776503 kubelet[2701]: I0515 12:53:50.776435 2701 kubelet.go:2306] "Pod admission denied" podUID="28c7e8c3-f079-4a49-9fce-c45d10fc0375" pod="tigera-operator/tigera-operator-6f6897fdc5-tsx24" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 12:53:50.875151 kubelet[2701]: I0515 12:53:50.874893 2701 kubelet.go:2306] "Pod admission denied" podUID="280142a3-30d3-4af8-b65b-cfe98639d54e" pod="tigera-operator/tigera-operator-6f6897fdc5-tdbsw" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:50.976589 kubelet[2701]: I0515 12:53:50.976545 2701 kubelet.go:2306] "Pod admission denied" podUID="123b7279-a43d-40f5-bcd1-2f2f65f4897b" pod="tigera-operator/tigera-operator-6f6897fdc5-9x9kn" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:51.074110 kubelet[2701]: I0515 12:53:51.074065 2701 kubelet.go:2306] "Pod admission denied" podUID="fc8c74ae-a206-4f15-927c-804034eb0d88" pod="tigera-operator/tigera-operator-6f6897fdc5-lxf5k" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:51.177816 kubelet[2701]: I0515 12:53:51.177495 2701 kubelet.go:2306] "Pod admission denied" podUID="ac49b033-4d7f-4be9-a7e5-0037263ee412" pod="tigera-operator/tigera-operator-6f6897fdc5-rp9m9" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:51.277126 kubelet[2701]: I0515 12:53:51.277072 2701 kubelet.go:2306] "Pod admission denied" podUID="981fa049-2408-4578-8b33-26a3a52dbf5f" pod="tigera-operator/tigera-operator-6f6897fdc5-9qq6z" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:51.380052 kubelet[2701]: I0515 12:53:51.379992 2701 kubelet.go:2306] "Pod admission denied" podUID="42a3e1e1-91ef-4d1b-9b3a-b784c78eb9e7" pod="tigera-operator/tigera-operator-6f6897fdc5-qqtmx" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:51.523945 systemd[1]: Started sshd@7-172.232.9.197:22-91.238.181.92:7053.service - OpenSSH per-connection server daemon (91.238.181.92:7053). May 15 12:53:51.553304 sshd[4225]: banner exchange: Connection from 91.238.181.92 port 7053: invalid format May 15 12:53:51.554145 systemd[1]: sshd@7-172.232.9.197:22-91.238.181.92:7053.service: Deactivated successfully. May 15 12:53:51.578589 kubelet[2701]: I0515 12:53:51.578545 2701 kubelet.go:2306] "Pod admission denied" podUID="c2ee3c65-3995-4dbe-bc87-b38d6ebd6a2d" pod="tigera-operator/tigera-operator-6f6897fdc5-d7w6l" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:51.675754 kubelet[2701]: I0515 12:53:51.675694 2701 kubelet.go:2306] "Pod admission denied" podUID="cdd0177a-4ff8-49ea-8edc-cd2b03da0be8" pod="tigera-operator/tigera-operator-6f6897fdc5-cbk4l" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:51.751014 systemd[1]: Started sshd@8-172.232.9.197:22-91.238.181.92:8483.service - OpenSSH per-connection server daemon (91.238.181.92:8483). May 15 12:53:51.780131 sshd[4229]: banner exchange: Connection from 91.238.181.92 port 8483: invalid format May 15 12:53:51.782061 systemd[1]: sshd@8-172.232.9.197:22-91.238.181.92:8483.service: Deactivated successfully. May 15 12:53:51.788004 kubelet[2701]: I0515 12:53:51.787970 2701 kubelet.go:2306] "Pod admission denied" podUID="bd8cf614-e1b3-4678-bf83-05e46012ff45" pod="tigera-operator/tigera-operator-6f6897fdc5-gnwbh" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:51.986858 systemd[1]: Started sshd@9-172.232.9.197:22-91.238.181.92:9483.service - OpenSSH per-connection server daemon (91.238.181.92:9483). 
May 15 12:53:52.010756 kubelet[2701]: I0515 12:53:52.010697 2701 kubelet.go:2306] "Pod admission denied" podUID="1b4ca52b-e168-4a0b-9344-7ec1cb55feef" pod="tigera-operator/tigera-operator-6f6897fdc5-5qrg5" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:52.041059 sshd[4233]: banner exchange: Connection from 91.238.181.92 port 9483: invalid format May 15 12:53:52.043149 kubelet[2701]: I0515 12:53:52.043057 2701 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 12:53:52.043149 kubelet[2701]: I0515 12:53:52.043089 2701 container_gc.go:88] "Attempting to delete unused containers" May 15 12:53:52.043271 systemd[1]: sshd@9-172.232.9.197:22-91.238.181.92:9483.service: Deactivated successfully. May 15 12:53:52.051055 kubelet[2701]: I0515 12:53:52.051032 2701 image_gc_manager.go:431] "Attempting to delete unused images" May 15 12:53:52.068283 kubelet[2701]: I0515 12:53:52.068223 2701 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 12:53:52.068618 kubelet[2701]: I0515 12:53:52.068433 2701 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-6cff7bc5b6-sff94","kube-system/coredns-6f6b679f8f-hfrjd","kube-system/coredns-6f6b679f8f-svgwx","calico-system/csi-node-driver-d7hrh","calico-system/calico-node-qxkgl","calico-system/calico-typha-59b79bbb46-9qqgw","kube-system/kube-controller-manager-172-232-9-197","kube-system/kube-proxy-rhz8r","kube-system/kube-apiserver-172-232-9-197","kube-system/kube-scheduler-172-232-9-197"] May 15 12:53:52.068618 kubelet[2701]: E0515 12:53:52.068519 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-6cff7bc5b6-sff94" May 15 12:53:52.068618 kubelet[2701]: E0515 12:53:52.068531 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-hfrjd" May 15 12:53:52.068618 kubelet[2701]: E0515 12:53:52.068537 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-svgwx" May 15 12:53:52.068618 kubelet[2701]: E0515 12:53:52.068545 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-d7hrh" May 15 12:53:52.068618 kubelet[2701]: E0515 12:53:52.068551 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-qxkgl" May 15 12:53:52.068618 kubelet[2701]: E0515 12:53:52.068561 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-59b79bbb46-9qqgw" May 15 12:53:52.068618 kubelet[2701]: E0515 12:53:52.068569 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-232-9-197" May 15 12:53:52.068618 kubelet[2701]: E0515 12:53:52.068577 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-rhz8r" May 15 12:53:52.068618 kubelet[2701]: E0515 12:53:52.068585 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-232-9-197" May 15 12:53:52.068618 kubelet[2701]: E0515 12:53:52.068592 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-232-9-197" May 15 12:53:52.068618 kubelet[2701]: I0515 12:53:52.068601 
2701 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 12:53:52.230398 kubelet[2701]: I0515 12:53:52.230338 2701 kubelet.go:2306] "Pod admission denied" podUID="b7484b59-7ec9-42af-9c53-938589bdb3c8" pod="tigera-operator/tigera-operator-6f6897fdc5-vk9ng" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:52.331168 kubelet[2701]: I0515 12:53:52.331134 2701 kubelet.go:2306] "Pod admission denied" podUID="2aa1760e-1ee6-431b-9c5d-3e664c54cbba" pod="tigera-operator/tigera-operator-6f6897fdc5-4v4vk" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:52.435727 kubelet[2701]: I0515 12:53:52.435664 2701 kubelet.go:2306] "Pod admission denied" podUID="ba113e55-f692-49ab-ba4a-a3dafa786ffc" pod="tigera-operator/tigera-operator-6f6897fdc5-wcm5n" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:52.532389 kubelet[2701]: I0515 12:53:52.532353 2701 kubelet.go:2306] "Pod admission denied" podUID="2f0a612b-1105-486e-a5fc-1658fd5a9ae7" pod="tigera-operator/tigera-operator-6f6897fdc5-bhrr8" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:52.630089 kubelet[2701]: I0515 12:53:52.629720 2701 kubelet.go:2306] "Pod admission denied" podUID="503394b2-0691-47ee-90ba-9cef617e90a2" pod="tigera-operator/tigera-operator-6f6897fdc5-hwqzk" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:52.748831 kubelet[2701]: I0515 12:53:52.748799 2701 kubelet.go:2306] "Pod admission denied" podUID="4cb83daa-09d1-45b4-86de-284cfd19c014" pod="tigera-operator/tigera-operator-6f6897fdc5-j8f85" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:52.832389 kubelet[2701]: I0515 12:53:52.832351 2701 kubelet.go:2306] "Pod admission denied" podUID="1e02ffb6-4936-45da-9451-4a9cf6f8d87d" pod="tigera-operator/tigera-operator-6f6897fdc5-6mjrn" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:52.928768 kubelet[2701]: I0515 12:53:52.928398 2701 kubelet.go:2306] "Pod admission denied" podUID="c854ab73-b567-4bb9-97f9-a1ae6972cf0c" pod="tigera-operator/tigera-operator-6f6897fdc5-4bzs2" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:53.033414 kubelet[2701]: I0515 12:53:53.033362 2701 kubelet.go:2306] "Pod admission denied" podUID="69d0e9e9-952e-40b5-8f86-94ebebaedee6" pod="tigera-operator/tigera-operator-6f6897fdc5-94nt9" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:53.131322 kubelet[2701]: I0515 12:53:53.131274 2701 kubelet.go:2306] "Pod admission denied" podUID="6147cd9d-0996-40c7-93fe-d1c575a8e2f9" pod="tigera-operator/tigera-operator-6f6897fdc5-b2b79" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:53.232069 kubelet[2701]: I0515 12:53:53.231291 2701 kubelet.go:2306] "Pod admission denied" podUID="d0e4e41e-e840-473e-bfe8-114492bd75ab" pod="tigera-operator/tigera-operator-6f6897fdc5-smwgk" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:53.331053 kubelet[2701]: I0515 12:53:53.331000 2701 kubelet.go:2306] "Pod admission denied" podUID="49476338-7e8c-4b1a-97a4-118fdab32f9f" pod="tigera-operator/tigera-operator-6f6897fdc5-lf6jc" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 12:53:53.430921 kubelet[2701]: I0515 12:53:53.430860 2701 kubelet.go:2306] "Pod admission denied" podUID="9f4b4d41-e81a-4351-b127-3be7431af425" pod="tigera-operator/tigera-operator-6f6897fdc5-4q5pb" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:53.547080 kubelet[2701]: I0515 12:53:53.546684 2701 kubelet.go:2306] "Pod admission denied" podUID="409a5fb4-afac-4731-98fd-b16dd8d5f477" pod="tigera-operator/tigera-operator-6f6897fdc5-4dzp9" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:53.732308 kubelet[2701]: I0515 12:53:53.732249 2701 kubelet.go:2306] "Pod admission denied" podUID="769171f3-e766-4390-a258-cafae576a255" pod="tigera-operator/tigera-operator-6f6897fdc5-9mq4z" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:53.839774 kubelet[2701]: I0515 12:53:53.839649 2701 kubelet.go:2306] "Pod admission denied" podUID="f59dc388-b267-4947-b50f-023791c31e03" pod="tigera-operator/tigera-operator-6f6897fdc5-9gpvp" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:53.933260 kubelet[2701]: I0515 12:53:53.933220 2701 kubelet.go:2306] "Pod admission denied" podUID="4c879514-3247-410f-97fd-9f59da5d569e" pod="tigera-operator/tigera-operator-6f6897fdc5-qvlcv" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:54.034749 kubelet[2701]: I0515 12:53:54.034309 2701 kubelet.go:2306] "Pod admission denied" podUID="2bb79057-fd2c-4835-af41-2c7087bbe21b" pod="tigera-operator/tigera-operator-6f6897fdc5-qc6rg" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:54.086948 kubelet[2701]: I0515 12:53:54.086911 2701 kubelet.go:2306] "Pod admission denied" podUID="27738481-2157-45dd-9b92-a8eec52a41da" pod="tigera-operator/tigera-operator-6f6897fdc5-jh25n" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:54.181575 kubelet[2701]: I0515 12:53:54.181034 2701 kubelet.go:2306] "Pod admission denied" podUID="7a1ca68c-f7cd-4916-8c66-ac4549137de3" pod="tigera-operator/tigera-operator-6f6897fdc5-pzpjf" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:54.287568 kubelet[2701]: I0515 12:53:54.287452 2701 kubelet.go:2306] "Pod admission denied" podUID="824924f5-5c3a-4a14-ad35-e704df047947" pod="tigera-operator/tigera-operator-6f6897fdc5-75qdh" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:54.384199 kubelet[2701]: I0515 12:53:54.383777 2701 kubelet.go:2306] "Pod admission denied" podUID="f34af118-2efc-4003-bf76-d7028a16203d" pod="tigera-operator/tigera-operator-6f6897fdc5-7p5p4" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:54.482942 kubelet[2701]: I0515 12:53:54.482671 2701 kubelet.go:2306] "Pod admission denied" podUID="5fc4d088-e619-4861-9a3a-cc715534a444" pod="tigera-operator/tigera-operator-6f6897fdc5-2fpcw" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:54.586320 kubelet[2701]: I0515 12:53:54.586279 2701 kubelet.go:2306] "Pod admission denied" podUID="0865e682-289b-4937-9066-7437ce2fa7d9" pod="tigera-operator/tigera-operator-6f6897fdc5-79w9t" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 12:53:54.685227 kubelet[2701]: I0515 12:53:54.685169 2701 kubelet.go:2306] "Pod admission denied" podUID="46953308-5e90-43ba-af26-7e7ba96fcbb7" pod="tigera-operator/tigera-operator-6f6897fdc5-6n4sp" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:54.785494 kubelet[2701]: I0515 12:53:54.785183 2701 kubelet.go:2306] "Pod admission denied" podUID="4331804a-b418-48ed-beb4-2165eaefb24d" pod="tigera-operator/tigera-operator-6f6897fdc5-4cscl" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:54.914206 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4099815952.mount: Deactivated successfully. May 15 12:53:54.915072 containerd[1536]: time="2025-05-15T12:53:54.914171664Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.29.3\": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount4099815952: write /var/lib/containerd/tmpmounts/containerd-mount4099815952/usr/lib/calico/bpf/from_nat_info.o: no space left on device" May 15 12:53:54.915072 containerd[1536]: time="2025-05-15T12:53:54.914503444Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" May 15 12:53:54.915528 kubelet[2701]: E0515 12:53:54.914999 2701 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.29.3\": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount4099815952: write /var/lib/containerd/tmpmounts/containerd-mount4099815952/usr/lib/calico/bpf/from_nat_info.o: no space left on device" image="ghcr.io/flatcar/calico/node:v3.29.3" May 15 12:53:54.915528 kubelet[2701]: E0515 12:53:54.915061 2701 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.29.3\": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount4099815952: write /var/lib/containerd/tmpmounts/containerd-mount4099815952/usr/lib/calico/bpf/from_nat_info.o: no space left on device" image="ghcr.io/flatcar/calico/node:v3.29.3" May 15 12:53:54.915861 kubelet[2701]: E0515 12:53:54.915246 2701 kuberuntime_manager.go:1272] "Unhandled Error" err="container 
&Container{Name:calico-node,Image:ghcr.io/flatcar/calico/node:v3.29.3,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:WAIT_FOR_DATASTORE,Value:true,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:k8s,operator,bgp,ValueFrom:nil,},EnvVar{Name:CALICO_DISABLE_FILE_LOGGING,Value:false,ValueFrom:nil,},EnvVar{Name:FELIX_DEFAULTENDPOINTTOHOSTACTION,Value:ACCEPT,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHENABLED,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHPORT,Value:9099,ValueFrom:nil,},EnvVar{Name:NODENAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:FELIX_TYPHAK8SNAMESPACE,Value:calico-system,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAK8SSERVICENAME,Value:calico-typha,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACAFILE,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACERTFILE,Value:/node-certs/tls.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAKEYFILE,Value:/node-certs/tls.key,ValueFrom:nil,},EnvVar{Name:FIPS_MODE_ENABLED,Value:false,ValueFrom:nil,},EnvVar{Name:NO_DEFAULT_POOLS,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACN,Value:typha-server,ValueFrom:nil,},EnvVar{Name:CALICO_MANAGE_CNI,Value:true,ValueFrom:nil,},EnvVar{Name:CALICO_NETWORKING_BACKEND,Value:bird,ValueFrom:nil,},EnvVar{Name:IP,Value:autodetect,ValueFrom:nil,},EnvVar{Name:IP_AUTODETECTION_METHOD,Value:first-found,ValueFrom:nil,},EnvVar{Name:IP6,Value:none,ValueFrom:nil,},EnvVar{Name:FELIX_IPV6SUPPORT,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:policysync,ReadOnly:false,MountPath:/var/run/nodeagent,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:node-certs,ReadOnly:true,MountPath:/node-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-calico,ReadOnly:false,MountPath:/var/run/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib-calico,ReadOnly:false,MountPath:/var/lib/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-log-dir,ReadOnly:false,MountPath:/var/log/calico/cni,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-net-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-acces
s-9zm75,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/liveness,Port:{0 9099 },Host:localhost,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/calico-node -bird-ready -felix-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/bin/calico-node -shutdown],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-node-qxkgl_calico-system(f69f6551-7509-4bf1-a40a-1170e25b66f0): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.29.3\": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount4099815952: write /var/lib/containerd/tmpmounts/containerd-mount4099815952/usr/lib/calico/bpf/from_nat_info.o: no space left on device" logger="UnhandledError" May 15 12:53:54.918363 kubelet[2701]: E0515 12:53:54.918315 2701 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.29.3\\\": failed to extract layer sha256:55c8cc0817d5128b2372fb799235750c10d753fc23543c605ef65dd4ae80c9b1: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount4099815952: write /var/lib/containerd/tmpmounts/containerd-mount4099815952/usr/lib/calico/bpf/from_nat_info.o: no space left on device\"" pod="calico-system/calico-node-qxkgl" podUID="f69f6551-7509-4bf1-a40a-1170e25b66f0" May 15 12:53:54.979972 kubelet[2701]: I0515 12:53:54.979917 2701 kubelet.go:2306] "Pod admission denied" podUID="9d86aeaa-26cf-4666-b7ef-74cce12815a3" pod="tigera-operator/tigera-operator-6f6897fdc5-bg4bb" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:55.091135 kubelet[2701]: I0515 12:53:55.091089 2701 kubelet.go:2306] "Pod admission denied" podUID="462e4030-394a-4140-8678-4918b75b4a90" pod="tigera-operator/tigera-operator-6f6897fdc5-24z5m" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:55.179120 kubelet[2701]: I0515 12:53:55.179071 2701 kubelet.go:2306] "Pod admission denied" podUID="93cf2a02-9ce5-43d6-b80e-2711755226fb" pod="tigera-operator/tigera-operator-6f6897fdc5-wqg6j" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 12:53:55.276485 kubelet[2701]: I0515 12:53:55.275830 2701 kubelet.go:2306] "Pod admission denied" podUID="8f67d936-27cd-4cc8-85ab-02b596a23312" pod="tigera-operator/tigera-operator-6f6897fdc5-4bw8r" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:55.377601 kubelet[2701]: I0515 12:53:55.377480 2701 kubelet.go:2306] "Pod admission denied" podUID="9b0c6841-10dc-4c59-9c40-e30ee4899fd5" pod="tigera-operator/tigera-operator-6f6897fdc5-g8nth" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:55.480733 kubelet[2701]: I0515 12:53:55.480667 2701 kubelet.go:2306] "Pod admission denied" podUID="0d5b0a51-0829-4b0e-ae23-cdc6f6cad212" pod="tigera-operator/tigera-operator-6f6897fdc5-47z67" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:55.575212 kubelet[2701]: I0515 12:53:55.575163 2701 kubelet.go:2306] "Pod admission denied" podUID="d956cdc8-bd92-4cb5-8034-49a7b04d2b34" pod="tigera-operator/tigera-operator-6f6897fdc5-fjgzr" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:55.780543 kubelet[2701]: I0515 12:53:55.780423 2701 kubelet.go:2306] "Pod admission denied" podUID="ea3d27c1-3ed8-4ae2-8ba7-f52c4fb8dcb8" pod="tigera-operator/tigera-operator-6f6897fdc5-qv7q7" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:55.876552 kubelet[2701]: I0515 12:53:55.876507 2701 kubelet.go:2306] "Pod admission denied" podUID="2a4570ae-d018-4337-932f-726b40133f19" pod="tigera-operator/tigera-operator-6f6897fdc5-svx9f" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:55.973795 kubelet[2701]: I0515 12:53:55.973748 2701 kubelet.go:2306] "Pod admission denied" podUID="43ceeb39-d3b5-412e-899d-21f450a9907d" pod="tigera-operator/tigera-operator-6f6897fdc5-9z2m2" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:56.079168 kubelet[2701]: I0515 12:53:56.079137 2701 kubelet.go:2306] "Pod admission denied" podUID="0a79f0a3-fd15-4aac-b38a-7912c438f8ee" pod="tigera-operator/tigera-operator-6f6897fdc5-jqqpm" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:56.175801 kubelet[2701]: I0515 12:53:56.175748 2701 kubelet.go:2306] "Pod admission denied" podUID="9f4ebdc4-fc50-42b8-8a00-3bb418743904" pod="tigera-operator/tigera-operator-6f6897fdc5-nvb6n" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:56.274530 kubelet[2701]: I0515 12:53:56.274487 2701 kubelet.go:2306] "Pod admission denied" podUID="cf8601d8-d844-459f-9e13-09ab2b17149c" pod="tigera-operator/tigera-operator-6f6897fdc5-8ptwh" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:56.373418 kubelet[2701]: I0515 12:53:56.373110 2701 kubelet.go:2306] "Pod admission denied" podUID="8b83e134-5ce3-4bca-b795-fcb43d5ae59c" pod="tigera-operator/tigera-operator-6f6897fdc5-sw5kn" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 12:53:56.408092 kubelet[2701]: E0515 12:53:56.407129 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:53:56.408398 kubelet[2701]: E0515 12:53:56.408194 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:53:56.408617 containerd[1536]: time="2025-05-15T12:53:56.408589551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d7hrh,Uid:30c809cf-5d96-45fe-9af3-2a80162d2f28,Namespace:calico-system,Attempt:0,}" May 15 12:53:56.409669 containerd[1536]: time="2025-05-15T12:53:56.408768881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hfrjd,Uid:be5ba310-2fcd-4763-b9cf-8dba85ce0f76,Namespace:kube-system,Attempt:0,}" May 15 12:53:56.490940 kubelet[2701]: I0515 12:53:56.490896 2701 kubelet.go:2306] "Pod admission denied" podUID="59029f17-672b-43d5-9855-6165e31060a8" pod="tigera-operator/tigera-operator-6f6897fdc5-q5hk7" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:56.494666 containerd[1536]: time="2025-05-15T12:53:56.494570803Z" level=error msg="Failed to destroy network for sandbox \"0a14a4bf10e3a0d65610c737d276b2cd7b89203ebab2bd89ea86700237b01f5d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:56.499309 containerd[1536]: time="2025-05-15T12:53:56.498326772Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d7hrh,Uid:30c809cf-5d96-45fe-9af3-2a80162d2f28,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a14a4bf10e3a0d65610c737d276b2cd7b89203ebab2bd89ea86700237b01f5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:56.498813 systemd[1]: run-netns-cni\x2dfd225e30\x2dcb43\x2d9cf9\x2d95f9\x2d28f6e4bc3f62.mount: Deactivated successfully. 
May 15 12:53:56.499944 kubelet[2701]: E0515 12:53:56.499709 2701 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a14a4bf10e3a0d65610c737d276b2cd7b89203ebab2bd89ea86700237b01f5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:56.499944 kubelet[2701]: E0515 12:53:56.499754 2701 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a14a4bf10e3a0d65610c737d276b2cd7b89203ebab2bd89ea86700237b01f5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d7hrh" May 15 12:53:56.499944 kubelet[2701]: E0515 12:53:56.499786 2701 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a14a4bf10e3a0d65610c737d276b2cd7b89203ebab2bd89ea86700237b01f5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d7hrh" May 15 12:53:56.499944 kubelet[2701]: E0515 12:53:56.499823 2701 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-d7hrh_calico-system(30c809cf-5d96-45fe-9af3-2a80162d2f28)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-d7hrh_calico-system(30c809cf-5d96-45fe-9af3-2a80162d2f28)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0a14a4bf10e3a0d65610c737d276b2cd7b89203ebab2bd89ea86700237b01f5d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-d7hrh" podUID="30c809cf-5d96-45fe-9af3-2a80162d2f28" May 15 12:53:56.515258 containerd[1536]: time="2025-05-15T12:53:56.513951671Z" level=error msg="Failed to destroy network for sandbox \"29e8e569cdeb4769689eac91524d71d48a5015a805bde3949e342d44fe058d3a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:56.516220 systemd[1]: run-netns-cni\x2d2ad47fdf\x2d45ff\x2d9c22\x2d6d4c\x2da198c47909b3.mount: Deactivated successfully. 
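
Behind both the DiskPressure evictions and the calico/node pull failure ("no space left on device" in the PullImage entries above) is an exhausted filesystem backing the containerd store. A minimal, Linux-only Go sketch that reports available space on that path; the /var/lib/containerd location and the 10% threshold are illustrative assumptions, not this node's actual kubelet eviction configuration:

package main

import (
	"fmt"
	"syscall"
)

func main() {
	const path = "/var/lib/containerd" // assumed location of the image/layer store
	var st syscall.Statfs_t
	if err := syscall.Statfs(path, &st); err != nil {
		fmt.Printf("statfs %s: %v\n", path, err)
		return
	}
	total := st.Blocks * uint64(st.Bsize)
	avail := st.Bavail * uint64(st.Bsize)
	pct := 100 * float64(avail) / float64(total)
	fmt.Printf("%s: %.1f GiB available of %.1f GiB (%.1f%%)\n",
		path, float64(avail)/(1<<30), float64(total)/(1<<30), pct)
	if pct < 10 {
		// Roughly the situation this log captures: so little space remains
		// that layer extraction fails, the eviction manager reports disk
		// pressure, and only critical pods are left to evict.
		fmt.Println("low free space: disk-pressure behavior like the above is expected")
	}
}
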
May 15 12:53:56.518928 containerd[1536]: time="2025-05-15T12:53:56.518654182Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hfrjd,Uid:be5ba310-2fcd-4763-b9cf-8dba85ce0f76,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"29e8e569cdeb4769689eac91524d71d48a5015a805bde3949e342d44fe058d3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:56.522791 kubelet[2701]: E0515 12:53:56.522545 2701 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29e8e569cdeb4769689eac91524d71d48a5015a805bde3949e342d44fe058d3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:56.522791 kubelet[2701]: E0515 12:53:56.522591 2701 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29e8e569cdeb4769689eac91524d71d48a5015a805bde3949e342d44fe058d3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-hfrjd" May 15 12:53:56.522791 kubelet[2701]: E0515 12:53:56.522608 2701 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29e8e569cdeb4769689eac91524d71d48a5015a805bde3949e342d44fe058d3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-hfrjd" May 15 12:53:56.522791 kubelet[2701]: E0515 12:53:56.522649 2701 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-hfrjd_kube-system(be5ba310-2fcd-4763-b9cf-8dba85ce0f76)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-hfrjd_kube-system(be5ba310-2fcd-4763-b9cf-8dba85ce0f76)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"29e8e569cdeb4769689eac91524d71d48a5015a805bde3949e342d44fe058d3a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-hfrjd" podUID="be5ba310-2fcd-4763-b9cf-8dba85ce0f76" May 15 12:53:56.575199 kubelet[2701]: I0515 12:53:56.575152 2701 kubelet.go:2306] "Pod admission denied" podUID="f07833f6-15b8-4c08-8aaf-7a66e05f523c" pod="tigera-operator/tigera-operator-6f6897fdc5-wqkdx" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:56.777221 kubelet[2701]: I0515 12:53:56.776606 2701 kubelet.go:2306] "Pod admission denied" podUID="deb8a0a0-f012-49ea-9bf4-8075d5b8395b" pod="tigera-operator/tigera-operator-6f6897fdc5-9gc8s" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 12:53:56.874223 kubelet[2701]: I0515 12:53:56.874170 2701 kubelet.go:2306] "Pod admission denied" podUID="0b184564-a78b-41fe-9afa-c81b55b61c62" pod="tigera-operator/tigera-operator-6f6897fdc5-krz6f" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:56.975647 kubelet[2701]: I0515 12:53:56.975609 2701 kubelet.go:2306] "Pod admission denied" podUID="295fbdf6-8c2c-42c6-95c6-72527d46d4d1" pod="tigera-operator/tigera-operator-6f6897fdc5-rcg4l" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:57.075060 kubelet[2701]: I0515 12:53:57.075009 2701 kubelet.go:2306] "Pod admission denied" podUID="6101022a-760f-4033-a8cb-bba377c7e3f0" pod="tigera-operator/tigera-operator-6f6897fdc5-hv5w7" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:57.187075 kubelet[2701]: I0515 12:53:57.187029 2701 kubelet.go:2306] "Pod admission denied" podUID="735258b4-290b-4434-bb10-b351a9b4af12" pod="tigera-operator/tigera-operator-6f6897fdc5-vdbss" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:57.273422 kubelet[2701]: I0515 12:53:57.273376 2701 kubelet.go:2306] "Pod admission denied" podUID="03a7f073-0cd1-4b07-b801-cad0469c5947" pod="tigera-operator/tigera-operator-6f6897fdc5-tg5x2" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:57.376407 kubelet[2701]: I0515 12:53:57.375917 2701 kubelet.go:2306] "Pod admission denied" podUID="359d0194-c658-4678-96e8-3cf63097ec5d" pod="tigera-operator/tigera-operator-6f6897fdc5-gmqx8" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:57.406881 kubelet[2701]: E0515 12:53:57.406846 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:53:57.407629 containerd[1536]: time="2025-05-15T12:53:57.407554340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-svgwx,Uid:a44ca807-6ed0-447f-9c4f-1a10e61b025b,Namespace:kube-system,Attempt:0,}" May 15 12:53:57.470390 containerd[1536]: time="2025-05-15T12:53:57.470217281Z" level=error msg="Failed to destroy network for sandbox \"482adbb0a490261984bb41115f9d075358b1c3731da0b13649df261dbb1f31b0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:57.473326 containerd[1536]: time="2025-05-15T12:53:57.473210538Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-svgwx,Uid:a44ca807-6ed0-447f-9c4f-1a10e61b025b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"482adbb0a490261984bb41115f9d075358b1c3731da0b13649df261dbb1f31b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:57.473420 systemd[1]: run-netns-cni\x2d6a117ce4\x2d48f1\x2d36e0\x2d3a1a\x2d1935eea2a761.mount: Deactivated successfully. 
May 15 12:53:57.474838 kubelet[2701]: E0515 12:53:57.473602 2701 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"482adbb0a490261984bb41115f9d075358b1c3731da0b13649df261dbb1f31b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:57.474838 kubelet[2701]: E0515 12:53:57.473657 2701 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"482adbb0a490261984bb41115f9d075358b1c3731da0b13649df261dbb1f31b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-svgwx" May 15 12:53:57.474838 kubelet[2701]: E0515 12:53:57.473680 2701 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"482adbb0a490261984bb41115f9d075358b1c3731da0b13649df261dbb1f31b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-svgwx" May 15 12:53:57.474838 kubelet[2701]: E0515 12:53:57.473728 2701 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-svgwx_kube-system(a44ca807-6ed0-447f-9c4f-1a10e61b025b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-svgwx_kube-system(a44ca807-6ed0-447f-9c4f-1a10e61b025b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"482adbb0a490261984bb41115f9d075358b1c3731da0b13649df261dbb1f31b0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-svgwx" podUID="a44ca807-6ed0-447f-9c4f-1a10e61b025b" May 15 12:53:57.576252 kubelet[2701]: I0515 12:53:57.576188 2701 kubelet.go:2306] "Pod admission denied" podUID="754a03b1-0e9b-43ac-865e-35488b69f6a5" pod="tigera-operator/tigera-operator-6f6897fdc5-gvvmm" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:57.676375 kubelet[2701]: I0515 12:53:57.675896 2701 kubelet.go:2306] "Pod admission denied" podUID="29738f78-4829-433f-a0a2-c2475180ba02" pod="tigera-operator/tigera-operator-6f6897fdc5-k2mxw" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:57.779999 kubelet[2701]: I0515 12:53:57.779945 2701 kubelet.go:2306] "Pod admission denied" podUID="867ede86-ea00-45df-ad4c-e60a29a4f829" pod="tigera-operator/tigera-operator-6f6897fdc5-zxb2s" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:57.875639 kubelet[2701]: I0515 12:53:57.875582 2701 kubelet.go:2306] "Pod admission denied" podUID="faaefe38-3df0-4f34-a2ec-61da18aca789" pod="tigera-operator/tigera-operator-6f6897fdc5-gfqhs" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 12:53:57.933769 kubelet[2701]: I0515 12:53:57.933296 2701 kubelet.go:2306] "Pod admission denied" podUID="03ca94ec-cb48-4ba5-b571-f80474347f3a" pod="tigera-operator/tigera-operator-6f6897fdc5-d4rj4" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:58.026527 kubelet[2701]: I0515 12:53:58.026470 2701 kubelet.go:2306] "Pod admission denied" podUID="4dd2a185-bb2d-4cb6-ad41-a775b3bef41c" pod="tigera-operator/tigera-operator-6f6897fdc5-sjhmt" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:58.125188 kubelet[2701]: I0515 12:53:58.125140 2701 kubelet.go:2306] "Pod admission denied" podUID="3fd0f18f-3d69-4f2e-b999-acffe0dc1035" pod="tigera-operator/tigera-operator-6f6897fdc5-p4mzq" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:58.228527 kubelet[2701]: I0515 12:53:58.228129 2701 kubelet.go:2306] "Pod admission denied" podUID="e5c99115-57c3-4349-9f64-542348fa171d" pod="tigera-operator/tigera-operator-6f6897fdc5-9mp9n" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:58.327559 kubelet[2701]: I0515 12:53:58.327504 2701 kubelet.go:2306] "Pod admission denied" podUID="2e1ac67e-7c8c-496d-8bc2-024b0682dc99" pod="tigera-operator/tigera-operator-6f6897fdc5-nln6t" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:58.430823 kubelet[2701]: I0515 12:53:58.430557 2701 kubelet.go:2306] "Pod admission denied" podUID="7ba95243-7803-4840-9380-b30871eda46c" pod="tigera-operator/tigera-operator-6f6897fdc5-rztw8" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:58.526740 kubelet[2701]: I0515 12:53:58.526445 2701 kubelet.go:2306] "Pod admission denied" podUID="80f93c85-f469-4ef2-acb7-3a86388dbcb8" pod="tigera-operator/tigera-operator-6f6897fdc5-2n65p" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:58.578688 kubelet[2701]: I0515 12:53:58.578635 2701 kubelet.go:2306] "Pod admission denied" podUID="41be7936-1425-4b25-ae54-93374784d0d2" pod="tigera-operator/tigera-operator-6f6897fdc5-w45r2" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:58.676325 kubelet[2701]: I0515 12:53:58.676257 2701 kubelet.go:2306] "Pod admission denied" podUID="3203e459-8025-44cd-97b3-c7ca35aaa8db" pod="tigera-operator/tigera-operator-6f6897fdc5-qzmpm" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:58.876934 kubelet[2701]: I0515 12:53:58.876687 2701 kubelet.go:2306] "Pod admission denied" podUID="92db7abe-647e-4240-ba48-87c94cb712d5" pod="tigera-operator/tigera-operator-6f6897fdc5-6z942" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:58.979298 kubelet[2701]: I0515 12:53:58.979249 2701 kubelet.go:2306] "Pod admission denied" podUID="9d4414ad-8963-4d1f-b079-21b843b9c94e" pod="tigera-operator/tigera-operator-6f6897fdc5-c6k89" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:59.099926 kubelet[2701]: I0515 12:53:59.099839 2701 kubelet.go:2306] "Pod admission denied" podUID="91276499-bea0-4656-a67f-53f63564968d" pod="tigera-operator/tigera-operator-6f6897fdc5-tgnbr" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 12:53:59.183374 kubelet[2701]: I0515 12:53:59.183210 2701 kubelet.go:2306] "Pod admission denied" podUID="2c0c55db-8f4e-4ce2-ae0e-ce781019ce10" pod="tigera-operator/tigera-operator-6f6897fdc5-kh4dx" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:59.282303 kubelet[2701]: I0515 12:53:59.282221 2701 kubelet.go:2306] "Pod admission denied" podUID="acd29587-2940-49f4-8f42-797257cea36b" pod="tigera-operator/tigera-operator-6f6897fdc5-62g56" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:59.379659 kubelet[2701]: I0515 12:53:59.379584 2701 kubelet.go:2306] "Pod admission denied" podUID="6b234299-ae70-4717-a484-eb1a63317c80" pod="tigera-operator/tigera-operator-6f6897fdc5-jtn5f" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:59.408758 containerd[1536]: time="2025-05-15T12:53:59.408664851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cff7bc5b6-sff94,Uid:9345d3c6-0d7e-4691-8b9a-ecd0176d0441,Namespace:calico-system,Attempt:0,}" May 15 12:53:59.484126 kubelet[2701]: I0515 12:53:59.483212 2701 kubelet.go:2306] "Pod admission denied" podUID="690eef58-ad10-447b-94d6-ebcd88054715" pod="tigera-operator/tigera-operator-6f6897fdc5-f82vw" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:59.487278 containerd[1536]: time="2025-05-15T12:53:59.487231743Z" level=error msg="Failed to destroy network for sandbox \"71454d875bb4f77c914e5b8abd1793725ae8a10231f63d35808f9b87d1ea0416\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:59.491160 systemd[1]: run-netns-cni\x2dc3a5221a\x2d4d49\x2d3c64\x2d360c\x2d34b1412e0662.mount: Deactivated successfully. 
May 15 12:53:59.504837 containerd[1536]: time="2025-05-15T12:53:59.504527844Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cff7bc5b6-sff94,Uid:9345d3c6-0d7e-4691-8b9a-ecd0176d0441,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"71454d875bb4f77c914e5b8abd1793725ae8a10231f63d35808f9b87d1ea0416\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:59.505329 kubelet[2701]: E0515 12:53:59.505118 2701 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71454d875bb4f77c914e5b8abd1793725ae8a10231f63d35808f9b87d1ea0416\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:59.505329 kubelet[2701]: E0515 12:53:59.505186 2701 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71454d875bb4f77c914e5b8abd1793725ae8a10231f63d35808f9b87d1ea0416\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6cff7bc5b6-sff94" May 15 12:53:59.505329 kubelet[2701]: E0515 12:53:59.505209 2701 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71454d875bb4f77c914e5b8abd1793725ae8a10231f63d35808f9b87d1ea0416\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6cff7bc5b6-sff94" May 15 12:53:59.505329 kubelet[2701]: E0515 12:53:59.505269 2701 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6cff7bc5b6-sff94_calico-system(9345d3c6-0d7e-4691-8b9a-ecd0176d0441)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6cff7bc5b6-sff94_calico-system(9345d3c6-0d7e-4691-8b9a-ecd0176d0441)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"71454d875bb4f77c914e5b8abd1793725ae8a10231f63d35808f9b87d1ea0416\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6cff7bc5b6-sff94" podUID="9345d3c6-0d7e-4691-8b9a-ecd0176d0441" May 15 12:53:59.728380 kubelet[2701]: I0515 12:53:59.728319 2701 kubelet.go:2306] "Pod admission denied" podUID="945f4c5e-d4fb-4f11-9880-c66229ae6d97" pod="tigera-operator/tigera-operator-6f6897fdc5-zfnsb" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:53:59.831940 kubelet[2701]: I0515 12:53:59.831869 2701 kubelet.go:2306] "Pod admission denied" podUID="926f2d9c-9f3a-4908-878e-e11da5527e8f" pod="tigera-operator/tigera-operator-6f6897fdc5-rwzwb" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 12:53:59.931086 kubelet[2701]: I0515 12:53:59.931017 2701 kubelet.go:2306] "Pod admission denied" podUID="e7a9b72e-4e3d-4c24-a24e-941e701b9d0b" pod="tigera-operator/tigera-operator-6f6897fdc5-tbk69" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:00.030904 kubelet[2701]: I0515 12:54:00.030840 2701 kubelet.go:2306] "Pod admission denied" podUID="8e2e3656-7fc3-49c6-b79f-e3fbbb43191c" pod="tigera-operator/tigera-operator-6f6897fdc5-9hx6b" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:00.137235 kubelet[2701]: I0515 12:54:00.137074 2701 kubelet.go:2306] "Pod admission denied" podUID="672141ea-2d80-469f-b522-5ed2117a5396" pod="tigera-operator/tigera-operator-6f6897fdc5-tgrf6" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:00.212841 kubelet[2701]: I0515 12:54:00.212777 2701 kubelet.go:2306] "Pod admission denied" podUID="c4fe3294-8a94-431c-9fe6-438090f797fa" pod="tigera-operator/tigera-operator-6f6897fdc5-srsln" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:00.337491 kubelet[2701]: I0515 12:54:00.336840 2701 kubelet.go:2306] "Pod admission denied" podUID="2216bd89-f58c-4560-9b79-f9c59eced8c2" pod="tigera-operator/tigera-operator-6f6897fdc5-lkskw" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:00.433137 kubelet[2701]: I0515 12:54:00.432790 2701 kubelet.go:2306] "Pod admission denied" podUID="e986c6c2-458a-496b-8db1-62fac0289de1" pod="tigera-operator/tigera-operator-6f6897fdc5-968d4" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:00.534762 kubelet[2701]: I0515 12:54:00.534690 2701 kubelet.go:2306] "Pod admission denied" podUID="eff5c802-e2b5-4823-9b8f-346d1af7912f" pod="tigera-operator/tigera-operator-6f6897fdc5-s2hq6" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:00.582419 kubelet[2701]: I0515 12:54:00.582351 2701 kubelet.go:2306] "Pod admission denied" podUID="8e034198-74b6-41b0-babe-0051caf355e1" pod="tigera-operator/tigera-operator-6f6897fdc5-z4t2t" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:00.683092 kubelet[2701]: I0515 12:54:00.683013 2701 kubelet.go:2306] "Pod admission denied" podUID="1b98f9da-6e65-4d64-835c-64f2f37dadbe" pod="tigera-operator/tigera-operator-6f6897fdc5-v54mv" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:00.784225 kubelet[2701]: I0515 12:54:00.784062 2701 kubelet.go:2306] "Pod admission denied" podUID="9d28bc20-0bdc-4e12-b826-1ce771bff743" pod="tigera-operator/tigera-operator-6f6897fdc5-zlspw" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:00.830561 kubelet[2701]: I0515 12:54:00.830493 2701 kubelet.go:2306] "Pod admission denied" podUID="46e2f7b7-0eda-489e-9ac7-5d0e602febd0" pod="tigera-operator/tigera-operator-6f6897fdc5-khrq4" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:00.935145 kubelet[2701]: I0515 12:54:00.935058 2701 kubelet.go:2306] "Pod admission denied" podUID="0d4bce34-c1b7-4808-b290-708c112dc295" pod="tigera-operator/tigera-operator-6f6897fdc5-6hmhh" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 12:54:01.031922 kubelet[2701]: I0515 12:54:01.031853 2701 kubelet.go:2306] "Pod admission denied" podUID="babb06ab-df37-4d89-8a71-050867ec2259" pod="tigera-operator/tigera-operator-6f6897fdc5-87bqr" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:01.132402 kubelet[2701]: I0515 12:54:01.132343 2701 kubelet.go:2306] "Pod admission denied" podUID="775fd1f9-476b-4ec8-82b2-e523889f8809" pod="tigera-operator/tigera-operator-6f6897fdc5-5ldfm" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:01.233686 kubelet[2701]: I0515 12:54:01.233615 2701 kubelet.go:2306] "Pod admission denied" podUID="6cfa1326-922d-4ed1-a351-ceae7f533f60" pod="tigera-operator/tigera-operator-6f6897fdc5-zzcb8" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:01.287266 kubelet[2701]: I0515 12:54:01.287190 2701 kubelet.go:2306] "Pod admission denied" podUID="f93493f6-2e59-4c88-a3ad-7ae39f247fd4" pod="tigera-operator/tigera-operator-6f6897fdc5-mg47b" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:01.381992 kubelet[2701]: I0515 12:54:01.381912 2701 kubelet.go:2306] "Pod admission denied" podUID="f8f1a819-5b09-4953-a5a5-9883eba07487" pod="tigera-operator/tigera-operator-6f6897fdc5-hzq7q" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:01.478225 kubelet[2701]: I0515 12:54:01.478059 2701 kubelet.go:2306] "Pod admission denied" podUID="a0e2c6f6-aef7-430d-816c-0148becca670" pod="tigera-operator/tigera-operator-6f6897fdc5-5b728" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:01.589489 kubelet[2701]: I0515 12:54:01.588842 2701 kubelet.go:2306] "Pod admission denied" podUID="d8192ac9-9fdc-497f-b816-c746953321a2" pod="tigera-operator/tigera-operator-6f6897fdc5-pfl65" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:01.683193 kubelet[2701]: I0515 12:54:01.683122 2701 kubelet.go:2306] "Pod admission denied" podUID="407989a2-072c-413b-a948-9f11daaf6fb6" pod="tigera-operator/tigera-operator-6f6897fdc5-stz67" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:01.731650 kubelet[2701]: I0515 12:54:01.731308 2701 kubelet.go:2306] "Pod admission denied" podUID="52331f2a-e00e-47ca-bfbf-2b8107164317" pod="tigera-operator/tigera-operator-6f6897fdc5-wgfqc" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:01.831475 kubelet[2701]: I0515 12:54:01.831391 2701 kubelet.go:2306] "Pod admission denied" podUID="8fcbec59-d684-41c1-a374-1d1ebaab480b" pod="tigera-operator/tigera-operator-6f6897fdc5-k7tjw" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:01.938338 kubelet[2701]: I0515 12:54:01.938277 2701 kubelet.go:2306] "Pod admission denied" podUID="d54ecc28-7771-44bc-9108-103186f92aaa" pod="tigera-operator/tigera-operator-6f6897fdc5-p2l6c" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:02.035650 kubelet[2701]: I0515 12:54:02.034272 2701 kubelet.go:2306] "Pod admission denied" podUID="f390da23-8f62-4398-bdae-eb171099bee4" pod="tigera-operator/tigera-operator-6f6897fdc5-rktwl" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 12:54:02.089572 kubelet[2701]: I0515 12:54:02.089510 2701 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 12:54:02.089572 kubelet[2701]: I0515 12:54:02.089554 2701 container_gc.go:88] "Attempting to delete unused containers" May 15 12:54:02.091838 kubelet[2701]: I0515 12:54:02.091805 2701 image_gc_manager.go:431] "Attempting to delete unused images" May 15 12:54:02.106977 kubelet[2701]: I0515 12:54:02.106938 2701 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 12:54:02.107089 kubelet[2701]: I0515 12:54:02.107048 2701 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-6f6b679f8f-svgwx","calico-system/calico-kube-controllers-6cff7bc5b6-sff94","kube-system/coredns-6f6b679f8f-hfrjd","calico-system/calico-node-qxkgl","calico-system/csi-node-driver-d7hrh","calico-system/calico-typha-59b79bbb46-9qqgw","kube-system/kube-controller-manager-172-232-9-197","kube-system/kube-proxy-rhz8r","kube-system/kube-apiserver-172-232-9-197","kube-system/kube-scheduler-172-232-9-197"] May 15 12:54:02.107171 kubelet[2701]: E0515 12:54:02.107092 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-svgwx" May 15 12:54:02.107171 kubelet[2701]: E0515 12:54:02.107104 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-6cff7bc5b6-sff94" May 15 12:54:02.107171 kubelet[2701]: E0515 12:54:02.107111 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-hfrjd" May 15 12:54:02.107171 kubelet[2701]: E0515 12:54:02.107119 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-qxkgl" May 15 12:54:02.107171 kubelet[2701]: E0515 12:54:02.107127 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-d7hrh" May 15 12:54:02.107171 kubelet[2701]: E0515 12:54:02.107140 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-59b79bbb46-9qqgw" May 15 12:54:02.107171 kubelet[2701]: E0515 12:54:02.107151 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-232-9-197" May 15 12:54:02.107171 kubelet[2701]: E0515 12:54:02.107162 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-rhz8r" May 15 12:54:02.107171 kubelet[2701]: E0515 12:54:02.107172 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-232-9-197" May 15 12:54:02.107171 kubelet[2701]: E0515 12:54:02.107183 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-232-9-197" May 15 12:54:02.107392 kubelet[2701]: I0515 12:54:02.107194 2701 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 12:54:02.133829 kubelet[2701]: I0515 12:54:02.133563 2701 kubelet.go:2306] "Pod admission denied" podUID="d928f51f-c22e-4cd9-8d91-8f21034d97dd" pod="tigera-operator/tigera-operator-6f6897fdc5-wn7sn" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 12:54:02.246471 kubelet[2701]: I0515 12:54:02.246386 2701 kubelet.go:2306] "Pod admission denied" podUID="9949b901-30cb-434c-bffb-f75760999308" pod="tigera-operator/tigera-operator-6f6897fdc5-pd99j" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:02.434530 kubelet[2701]: I0515 12:54:02.434437 2701 kubelet.go:2306] "Pod admission denied" podUID="cb336354-5133-4afc-865c-319bb1716e90" pod="tigera-operator/tigera-operator-6f6897fdc5-t5q62" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:02.533348 kubelet[2701]: I0515 12:54:02.533278 2701 kubelet.go:2306] "Pod admission denied" podUID="77b2cdaf-ca85-40f8-99a8-c50a4ec1978a" pod="tigera-operator/tigera-operator-6f6897fdc5-7krcl" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:02.579865 kubelet[2701]: I0515 12:54:02.579811 2701 kubelet.go:2306] "Pod admission denied" podUID="d434e275-6a91-4cde-bafc-fbb7fb0cb1cb" pod="tigera-operator/tigera-operator-6f6897fdc5-zrcjq" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:02.684711 kubelet[2701]: I0515 12:54:02.684530 2701 kubelet.go:2306] "Pod admission denied" podUID="d6a535ab-22f6-4922-8a00-b8968dbc51ea" pod="tigera-operator/tigera-operator-6f6897fdc5-srt85" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:02.880240 kubelet[2701]: I0515 12:54:02.880162 2701 kubelet.go:2306] "Pod admission denied" podUID="b68b6a0d-d7e5-4c4e-ae9a-14cc319d1c7f" pod="tigera-operator/tigera-operator-6f6897fdc5-r5lhw" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:02.984645 kubelet[2701]: I0515 12:54:02.983973 2701 kubelet.go:2306] "Pod admission denied" podUID="741eda7e-4e66-4086-a7c9-8ae2a57303ee" pod="tigera-operator/tigera-operator-6f6897fdc5-qkkrn" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:03.042852 kubelet[2701]: I0515 12:54:03.042592 2701 kubelet.go:2306] "Pod admission denied" podUID="a8efc778-0504-4abd-8df2-5e0d3ce9c422" pod="tigera-operator/tigera-operator-6f6897fdc5-9v8fd" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:03.139101 kubelet[2701]: I0515 12:54:03.139028 2701 kubelet.go:2306] "Pod admission denied" podUID="e93ebe5d-2ed5-4d39-9923-0d28586eaca7" pod="tigera-operator/tigera-operator-6f6897fdc5-8rr29" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:03.335755 kubelet[2701]: I0515 12:54:03.335688 2701 kubelet.go:2306] "Pod admission denied" podUID="6971374b-6b98-4fe2-a370-a30145cce96a" pod="tigera-operator/tigera-operator-6f6897fdc5-z7nsz" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:03.452504 kubelet[2701]: I0515 12:54:03.452329 2701 kubelet.go:2306] "Pod admission denied" podUID="161cb999-243a-48c5-aa10-d49157be3301" pod="tigera-operator/tigera-operator-6f6897fdc5-frtzr" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:03.487889 kubelet[2701]: I0515 12:54:03.487826 2701 kubelet.go:2306] "Pod admission denied" podUID="dbed5d77-52aa-4455-afb1-243f0340cebc" pod="tigera-operator/tigera-operator-6f6897fdc5-nvxn5" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 12:54:03.580219 kubelet[2701]: I0515 12:54:03.580146 2701 kubelet.go:2306] "Pod admission denied" podUID="c623a1a1-1e9c-4b29-ac91-c0f987ef650b" pod="tigera-operator/tigera-operator-6f6897fdc5-gv5jq" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:03.684208 kubelet[2701]: I0515 12:54:03.684034 2701 kubelet.go:2306] "Pod admission denied" podUID="6126b743-36f7-43fa-940a-ef88255764a6" pod="tigera-operator/tigera-operator-6f6897fdc5-mffv5" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:03.779385 kubelet[2701]: I0515 12:54:03.779297 2701 kubelet.go:2306] "Pod admission denied" podUID="ca018be7-625b-40eb-95b9-2529f7ab243d" pod="tigera-operator/tigera-operator-6f6897fdc5-8m7g7" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:03.979053 kubelet[2701]: I0515 12:54:03.978936 2701 kubelet.go:2306] "Pod admission denied" podUID="e5fb1af1-cf45-4491-b768-be04668c68fc" pod="tigera-operator/tigera-operator-6f6897fdc5-np2kw" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:04.076189 kubelet[2701]: I0515 12:54:04.076150 2701 kubelet.go:2306] "Pod admission denied" podUID="ddc87c83-9e00-4367-a203-2dced5b93bb6" pod="tigera-operator/tigera-operator-6f6897fdc5-gf6s8" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:04.125247 kubelet[2701]: I0515 12:54:04.125202 2701 kubelet.go:2306] "Pod admission denied" podUID="675a57ca-4bff-4824-ac87-456871d132e5" pod="tigera-operator/tigera-operator-6f6897fdc5-5hrh9" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:04.228036 kubelet[2701]: I0515 12:54:04.227982 2701 kubelet.go:2306] "Pod admission denied" podUID="89d0ffdc-1106-4a8b-9f6b-0e2baa8cdc14" pod="tigera-operator/tigera-operator-6f6897fdc5-nnxw7" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:04.328687 kubelet[2701]: I0515 12:54:04.328647 2701 kubelet.go:2306] "Pod admission denied" podUID="7286e7b8-b7c3-4c1a-8199-b9db02c8d556" pod="tigera-operator/tigera-operator-6f6897fdc5-q9lcz" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:04.429770 kubelet[2701]: I0515 12:54:04.429718 2701 kubelet.go:2306] "Pod admission denied" podUID="0fdc07c6-77ba-4986-92e8-4718ef6b3abc" pod="tigera-operator/tigera-operator-6f6897fdc5-4mm82" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:04.626443 kubelet[2701]: I0515 12:54:04.625964 2701 kubelet.go:2306] "Pod admission denied" podUID="7f87c977-e589-41c6-b9d7-c0f1a1d9790d" pod="tigera-operator/tigera-operator-6f6897fdc5-68g75" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:04.732013 kubelet[2701]: I0515 12:54:04.731772 2701 kubelet.go:2306] "Pod admission denied" podUID="202d7efb-0980-48ee-9239-fd551595adb4" pod="tigera-operator/tigera-operator-6f6897fdc5-ghsv6" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:04.826916 kubelet[2701]: I0515 12:54:04.826859 2701 kubelet.go:2306] "Pod admission denied" podUID="a88e4d27-dbdc-459a-aa7f-3dea1241369e" pod="tigera-operator/tigera-operator-6f6897fdc5-2dnjl" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 12:54:04.930192 kubelet[2701]: I0515 12:54:04.929647 2701 kubelet.go:2306] "Pod admission denied" podUID="10f08329-a957-46ef-b616-cee419b87988" pod="tigera-operator/tigera-operator-6f6897fdc5-vgswq" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:04.975709 kubelet[2701]: I0515 12:54:04.975658 2701 kubelet.go:2306] "Pod admission denied" podUID="10900a19-164a-42c3-833c-6a742c7e830c" pod="tigera-operator/tigera-operator-6f6897fdc5-wx69b" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:05.078908 kubelet[2701]: I0515 12:54:05.078858 2701 kubelet.go:2306] "Pod admission denied" podUID="4786b523-c495-4f3f-aa12-320a64285942" pod="tigera-operator/tigera-operator-6f6897fdc5-79jcq" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:05.176035 kubelet[2701]: I0515 12:54:05.175977 2701 kubelet.go:2306] "Pod admission denied" podUID="77c24ea1-b56b-496a-9e9e-760916591400" pod="tigera-operator/tigera-operator-6f6897fdc5-fq957" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:05.285141 kubelet[2701]: I0515 12:54:05.284556 2701 kubelet.go:2306] "Pod admission denied" podUID="d5aa0102-5d02-4562-97c1-24b4947a6d61" pod="tigera-operator/tigera-operator-6f6897fdc5-lmbt4" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:05.375786 kubelet[2701]: I0515 12:54:05.375739 2701 kubelet.go:2306] "Pod admission denied" podUID="a3047bc4-9f0b-4373-bade-3baeec588593" pod="tigera-operator/tigera-operator-6f6897fdc5-x9fvj" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:05.478418 kubelet[2701]: I0515 12:54:05.478368 2701 kubelet.go:2306] "Pod admission denied" podUID="6f18b662-8a81-4e33-bcbb-16c76ba645d5" pod="tigera-operator/tigera-operator-6f6897fdc5-vml9f" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:05.676980 kubelet[2701]: I0515 12:54:05.676930 2701 kubelet.go:2306] "Pod admission denied" podUID="041d83e1-b5fb-4fc1-a2bc-1cbaeb142b89" pod="tigera-operator/tigera-operator-6f6897fdc5-cmwqg" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:05.778275 kubelet[2701]: I0515 12:54:05.778222 2701 kubelet.go:2306] "Pod admission denied" podUID="02f0f336-a54c-4d7b-9f9a-44c26a5c6cf5" pod="tigera-operator/tigera-operator-6f6897fdc5-48lpq" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:05.877747 kubelet[2701]: I0515 12:54:05.877700 2701 kubelet.go:2306] "Pod admission denied" podUID="732e0f11-e70a-4c02-9395-c7533edec328" pod="tigera-operator/tigera-operator-6f6897fdc5-xzl76" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:05.975805 kubelet[2701]: I0515 12:54:05.975680 2701 kubelet.go:2306] "Pod admission denied" podUID="3dd416b7-fa6f-4be1-9269-def6d4afd214" pod="tigera-operator/tigera-operator-6f6897fdc5-8j2wm" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:06.081099 kubelet[2701]: I0515 12:54:06.081050 2701 kubelet.go:2306] "Pod admission denied" podUID="10c3b189-55a8-4ed3-b19c-e63579124652" pod="tigera-operator/tigera-operator-6f6897fdc5-nqfpb" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 12:54:06.278771 kubelet[2701]: I0515 12:54:06.278437 2701 kubelet.go:2306] "Pod admission denied" podUID="1f83a642-cfb4-4f2d-9bc6-af9d79af4a75" pod="tigera-operator/tigera-operator-6f6897fdc5-jcv4j" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:06.377789 kubelet[2701]: I0515 12:54:06.377742 2701 kubelet.go:2306] "Pod admission denied" podUID="1cbc8bca-49d1-42f4-ac93-599371cf35b6" pod="tigera-operator/tigera-operator-6f6897fdc5-msx84" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:06.428068 kubelet[2701]: I0515 12:54:06.428015 2701 kubelet.go:2306] "Pod admission denied" podUID="d13a1c80-ed20-43ae-8d14-8e8f158a9f73" pod="tigera-operator/tigera-operator-6f6897fdc5-4jrwx" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:06.526582 kubelet[2701]: I0515 12:54:06.526529 2701 kubelet.go:2306] "Pod admission denied" podUID="eb511230-a7b9-4af3-92a6-0b5cb4f09480" pod="tigera-operator/tigera-operator-6f6897fdc5-b7xtd" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:06.627701 kubelet[2701]: I0515 12:54:06.627639 2701 kubelet.go:2306] "Pod admission denied" podUID="c2618b36-062e-4e51-ba2c-9eace527e137" pod="tigera-operator/tigera-operator-6f6897fdc5-xmtfg" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:06.729293 kubelet[2701]: I0515 12:54:06.729242 2701 kubelet.go:2306] "Pod admission denied" podUID="7b7b1141-23ad-4f4e-8224-5372eb746b86" pod="tigera-operator/tigera-operator-6f6897fdc5-qz2q2" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:06.827946 kubelet[2701]: I0515 12:54:06.827704 2701 kubelet.go:2306] "Pod admission denied" podUID="47270fb6-3d8c-41bd-b9bd-273510eae98b" pod="tigera-operator/tigera-operator-6f6897fdc5-7s2bc" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:06.943739 kubelet[2701]: I0515 12:54:06.943566 2701 kubelet.go:2306] "Pod admission denied" podUID="ea2649bc-1a78-4483-a5b4-4431122300c4" pod="tigera-operator/tigera-operator-6f6897fdc5-jzvsv" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:07.029361 kubelet[2701]: I0515 12:54:07.029309 2701 kubelet.go:2306] "Pod admission denied" podUID="ab97bf93-96e0-4458-8212-a9e8dc324536" pod="tigera-operator/tigera-operator-6f6897fdc5-4p4v5" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:07.075950 kubelet[2701]: I0515 12:54:07.075902 2701 kubelet.go:2306] "Pod admission denied" podUID="49e6f259-fc1b-4bc5-8df1-40ee81be6c79" pod="tigera-operator/tigera-operator-6f6897fdc5-zdxhr" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:07.184481 kubelet[2701]: I0515 12:54:07.183954 2701 kubelet.go:2306] "Pod admission denied" podUID="16136d62-f1fc-4564-a1d9-e5c8b153e5cf" pod="tigera-operator/tigera-operator-6f6897fdc5-46jt6" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:07.275338 kubelet[2701]: I0515 12:54:07.275212 2701 kubelet.go:2306] "Pod admission denied" podUID="ac3876b7-21ec-4423-a6f8-cb7187afe799" pod="tigera-operator/tigera-operator-6f6897fdc5-cxn6f" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 12:54:07.380063 kubelet[2701]: I0515 12:54:07.380016 2701 kubelet.go:2306] "Pod admission denied" podUID="f53900d4-5e0d-4c51-ad0b-0308bbe2ae77" pod="tigera-operator/tigera-operator-6f6897fdc5-sqk6s" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:07.408321 containerd[1536]: time="2025-05-15T12:54:07.408264737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d7hrh,Uid:30c809cf-5d96-45fe-9af3-2a80162d2f28,Namespace:calico-system,Attempt:0,}" May 15 12:54:07.494682 kubelet[2701]: I0515 12:54:07.494393 2701 kubelet.go:2306] "Pod admission denied" podUID="fedfb99c-d7ab-431c-8c2e-3ad6bb451d52" pod="tigera-operator/tigera-operator-6f6897fdc5-mhkzp" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:07.513495 containerd[1536]: time="2025-05-15T12:54:07.513312717Z" level=error msg="Failed to destroy network for sandbox \"981e4c5fcc67e15f3e66295f9567ce0bcf531a5354727ec5846e7321d3dfaa7e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:54:07.516713 containerd[1536]: time="2025-05-15T12:54:07.516555713Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d7hrh,Uid:30c809cf-5d96-45fe-9af3-2a80162d2f28,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"981e4c5fcc67e15f3e66295f9567ce0bcf531a5354727ec5846e7321d3dfaa7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:54:07.517488 kubelet[2701]: E0515 12:54:07.516914 2701 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"981e4c5fcc67e15f3e66295f9567ce0bcf531a5354727ec5846e7321d3dfaa7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:54:07.517488 kubelet[2701]: E0515 12:54:07.516991 2701 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"981e4c5fcc67e15f3e66295f9567ce0bcf531a5354727ec5846e7321d3dfaa7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d7hrh" May 15 12:54:07.517488 kubelet[2701]: E0515 12:54:07.517016 2701 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"981e4c5fcc67e15f3e66295f9567ce0bcf531a5354727ec5846e7321d3dfaa7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d7hrh" May 15 12:54:07.517488 kubelet[2701]: E0515 12:54:07.517053 2701 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-d7hrh_calico-system(30c809cf-5d96-45fe-9af3-2a80162d2f28)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-d7hrh_calico-system(30c809cf-5d96-45fe-9af3-2a80162d2f28)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"981e4c5fcc67e15f3e66295f9567ce0bcf531a5354727ec5846e7321d3dfaa7e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-d7hrh" podUID="30c809cf-5d96-45fe-9af3-2a80162d2f28" May 15 12:54:07.519251 systemd[1]: run-netns-cni\x2de4604341\x2d94fc\x2d4047\x2ded67\x2da40f4f7463e7.mount: Deactivated successfully. May 15 12:54:07.587266 kubelet[2701]: I0515 12:54:07.587233 2701 kubelet.go:2306] "Pod admission denied" podUID="89cfc368-6116-4f6e-beb2-6da32ca4b8c7" pod="tigera-operator/tigera-operator-6f6897fdc5-ckmpw" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:07.708982 kubelet[2701]: I0515 12:54:07.708928 2701 kubelet.go:2306] "Pod admission denied" podUID="fd369358-682f-44ac-b4dd-d438ccacbf84" pod="tigera-operator/tigera-operator-6f6897fdc5-r8jhj" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:07.840766 kubelet[2701]: I0515 12:54:07.840102 2701 kubelet.go:2306] "Pod admission denied" podUID="69e2ff2b-0d08-4ca9-bcaa-5c17f8ca84d6" pod="tigera-operator/tigera-operator-6f6897fdc5-8x5nb" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:07.927302 kubelet[2701]: I0515 12:54:07.927255 2701 kubelet.go:2306] "Pod admission denied" podUID="52fe711e-e997-426a-94cd-c7e0c7b88dae" pod="tigera-operator/tigera-operator-6f6897fdc5-scbc9" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:08.029034 kubelet[2701]: I0515 12:54:08.028983 2701 kubelet.go:2306] "Pod admission denied" podUID="262d17e8-85ff-4558-a390-74308a06d4ac" pod="tigera-operator/tigera-operator-6f6897fdc5-n4p69" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:08.135646 kubelet[2701]: I0515 12:54:08.134177 2701 kubelet.go:2306] "Pod admission denied" podUID="749fa34c-09c0-47da-851a-c59b2e76576e" pod="tigera-operator/tigera-operator-6f6897fdc5-wkhkh" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:08.329999 kubelet[2701]: I0515 12:54:08.329941 2701 kubelet.go:2306] "Pod admission denied" podUID="6eb276a1-26fa-453e-9df7-187f7cd37003" pod="tigera-operator/tigera-operator-6f6897fdc5-l9g2q" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 12:54:08.414529 kubelet[2701]: E0515 12:54:08.412499 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:54:08.415058 kubelet[2701]: E0515 12:54:08.414837 2701 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.29.3\\\"\"" pod="calico-system/calico-node-qxkgl" podUID="f69f6551-7509-4bf1-a40a-1170e25b66f0" May 15 12:54:08.415350 kubelet[2701]: E0515 12:54:08.415333 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:54:08.416538 containerd[1536]: time="2025-05-15T12:54:08.416168377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-svgwx,Uid:a44ca807-6ed0-447f-9c4f-1a10e61b025b,Namespace:kube-system,Attempt:0,}" May 15 12:54:08.451244 kubelet[2701]: I0515 12:54:08.450784 2701 kubelet.go:2306] "Pod admission denied" podUID="6b942167-3dba-4888-a962-d671f99c4be0" pod="tigera-operator/tigera-operator-6f6897fdc5-jblgd" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:08.538629 kubelet[2701]: I0515 12:54:08.538409 2701 kubelet.go:2306] "Pod admission denied" podUID="7be89961-c7df-4743-b48e-060e0be37fdd" pod="tigera-operator/tigera-operator-6f6897fdc5-96c5z" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:08.542914 containerd[1536]: time="2025-05-15T12:54:08.542858096Z" level=error msg="Failed to destroy network for sandbox \"2ba2d7dfa7acc40b01b7397f4d92949632865ed8f9ab08b28da15ee78a2de4dc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:54:08.545038 containerd[1536]: time="2025-05-15T12:54:08.544992130Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-svgwx,Uid:a44ca807-6ed0-447f-9c4f-1a10e61b025b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ba2d7dfa7acc40b01b7397f4d92949632865ed8f9ab08b28da15ee78a2de4dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:54:08.545471 kubelet[2701]: E0515 12:54:08.545398 2701 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ba2d7dfa7acc40b01b7397f4d92949632865ed8f9ab08b28da15ee78a2de4dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:54:08.546484 kubelet[2701]: E0515 12:54:08.545443 2701 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ba2d7dfa7acc40b01b7397f4d92949632865ed8f9ab08b28da15ee78a2de4dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-svgwx" May 15 
12:54:08.546484 kubelet[2701]: E0515 12:54:08.545878 2701 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ba2d7dfa7acc40b01b7397f4d92949632865ed8f9ab08b28da15ee78a2de4dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-svgwx" May 15 12:54:08.546484 kubelet[2701]: E0515 12:54:08.545935 2701 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-svgwx_kube-system(a44ca807-6ed0-447f-9c4f-1a10e61b025b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-svgwx_kube-system(a44ca807-6ed0-447f-9c4f-1a10e61b025b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2ba2d7dfa7acc40b01b7397f4d92949632865ed8f9ab08b28da15ee78a2de4dc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-svgwx" podUID="a44ca807-6ed0-447f-9c4f-1a10e61b025b" May 15 12:54:08.546246 systemd[1]: run-netns-cni\x2dff44b4fe\x2dd655\x2d7142\x2d2b5f\x2db5fd2b6d4f4a.mount: Deactivated successfully. May 15 12:54:08.627673 kubelet[2701]: I0515 12:54:08.627612 2701 kubelet.go:2306] "Pod admission denied" podUID="609825ff-abef-4b45-a079-fe4a88d4f88d" pod="tigera-operator/tigera-operator-6f6897fdc5-fj5nx" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:08.728184 kubelet[2701]: I0515 12:54:08.728057 2701 kubelet.go:2306] "Pod admission denied" podUID="6d70a7b9-dfa2-49d0-94a7-14136006a62b" pod="tigera-operator/tigera-operator-6f6897fdc5-ql4g9" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:08.936289 kubelet[2701]: I0515 12:54:08.936236 2701 kubelet.go:2306] "Pod admission denied" podUID="b5da8ac4-0902-41d7-8477-fb443fdb3a26" pod="tigera-operator/tigera-operator-6f6897fdc5-sl5j2" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:09.030880 kubelet[2701]: I0515 12:54:09.030286 2701 kubelet.go:2306] "Pod admission denied" podUID="c29a25c2-40a3-43f9-b9cd-ef6c61dfccf2" pod="tigera-operator/tigera-operator-6f6897fdc5-8b7jg" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:09.129622 kubelet[2701]: I0515 12:54:09.129577 2701 kubelet.go:2306] "Pod admission denied" podUID="40c4728b-be0c-4e6c-8e3a-64d3eb6f30dd" pod="tigera-operator/tigera-operator-6f6897fdc5-8nqxq" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:09.589710 kubelet[2701]: I0515 12:54:09.589660 2701 kubelet.go:2306] "Pod admission denied" podUID="41005b38-7e0a-496c-9cf5-6e30ecbf7966" pod="tigera-operator/tigera-operator-6f6897fdc5-c2mff" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:09.621503 kubelet[2701]: I0515 12:54:09.621436 2701 kubelet.go:2306] "Pod admission denied" podUID="7af8ead2-dbb8-4907-86a1-89195721199a" pod="tigera-operator/tigera-operator-6f6897fdc5-d6kmk" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 12:54:09.659581 kubelet[2701]: I0515 12:54:09.659504 2701 kubelet.go:2306] "Pod admission denied" podUID="bbb74fec-3873-4b4f-92c5-222b227c65ac" pod="tigera-operator/tigera-operator-6f6897fdc5-wqd64" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:09.724351 kubelet[2701]: I0515 12:54:09.724298 2701 kubelet.go:2306] "Pod admission denied" podUID="bbdfe549-6eec-44fe-88e1-fc6054f8f0e2" pod="tigera-operator/tigera-operator-6f6897fdc5-zqbrg" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:09.757062 kubelet[2701]: I0515 12:54:09.757006 2701 kubelet.go:2306] "Pod admission denied" podUID="f9899523-81ef-4d4b-ab86-876bbf3b7135" pod="tigera-operator/tigera-operator-6f6897fdc5-kf25x" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:09.878048 kubelet[2701]: I0515 12:54:09.877608 2701 kubelet.go:2306] "Pod admission denied" podUID="ea1ff93c-23b8-4117-b7c0-7071a102f924" pod="tigera-operator/tigera-operator-6f6897fdc5-fwfz6" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:09.990653 kubelet[2701]: I0515 12:54:09.990218 2701 kubelet.go:2306] "Pod admission denied" podUID="411d632a-4a32-41e5-a264-3776a52527eb" pod="tigera-operator/tigera-operator-6f6897fdc5-zmcwb" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:10.078133 kubelet[2701]: I0515 12:54:10.078075 2701 kubelet.go:2306] "Pod admission denied" podUID="4e60fe7c-7fd6-47fd-8105-44fb190b326a" pod="tigera-operator/tigera-operator-6f6897fdc5-n7fcz" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:10.129804 kubelet[2701]: I0515 12:54:10.128847 2701 kubelet.go:2306] "Pod admission denied" podUID="58f519e6-c0e3-4d59-8f7e-31770f7fb8d5" pod="tigera-operator/tigera-operator-6f6897fdc5-qzkp4" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:10.240215 kubelet[2701]: I0515 12:54:10.240163 2701 kubelet.go:2306] "Pod admission denied" podUID="ea7091a5-51e2-458d-b86d-73d728b47506" pod="tigera-operator/tigera-operator-6f6897fdc5-6nln2" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:10.327442 kubelet[2701]: I0515 12:54:10.327391 2701 kubelet.go:2306] "Pod admission denied" podUID="c1838b52-b388-4e69-9762-800da3ac1968" pod="tigera-operator/tigera-operator-6f6897fdc5-4vdr6" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:10.430436 kubelet[2701]: I0515 12:54:10.429904 2701 kubelet.go:2306] "Pod admission denied" podUID="75794b41-7e3e-4ca1-8be1-e6830eb4a7ef" pod="tigera-operator/tigera-operator-6f6897fdc5-p8nmp" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:10.529334 kubelet[2701]: I0515 12:54:10.529285 2701 kubelet.go:2306] "Pod admission denied" podUID="07f8620e-9998-4b10-af2c-a430b63dfab6" pod="tigera-operator/tigera-operator-6f6897fdc5-lw6mn" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:10.579294 kubelet[2701]: I0515 12:54:10.579249 2701 kubelet.go:2306] "Pod admission denied" podUID="751e237c-a521-4e54-9194-5a4002f71019" pod="tigera-operator/tigera-operator-6f6897fdc5-srvtq" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 12:54:10.690688 kubelet[2701]: I0515 12:54:10.689521 2701 kubelet.go:2306] "Pod admission denied" podUID="4e1b2680-d2ee-45aa-b314-6c9a47e294ef" pod="tigera-operator/tigera-operator-6f6897fdc5-prmg7" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:10.780790 kubelet[2701]: I0515 12:54:10.780740 2701 kubelet.go:2306] "Pod admission denied" podUID="d8e9421a-dddc-4a99-839b-2d3042277c15" pod="tigera-operator/tigera-operator-6f6897fdc5-wcsdj" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:10.878261 kubelet[2701]: I0515 12:54:10.878217 2701 kubelet.go:2306] "Pod admission denied" podUID="a425479f-7511-464a-a1e9-5dfef2cfffde" pod="tigera-operator/tigera-operator-6f6897fdc5-x4vzn" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:10.978381 kubelet[2701]: I0515 12:54:10.978243 2701 kubelet.go:2306] "Pod admission denied" podUID="b2bf64a3-9ee3-42f8-a6f5-9ced84c8c680" pod="tigera-operator/tigera-operator-6f6897fdc5-q94zm" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:11.079658 kubelet[2701]: I0515 12:54:11.079595 2701 kubelet.go:2306] "Pod admission denied" podUID="d63f973a-891f-4929-ac56-9661432db352" pod="tigera-operator/tigera-operator-6f6897fdc5-wc6mv" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:11.186252 kubelet[2701]: I0515 12:54:11.186205 2701 kubelet.go:2306] "Pod admission denied" podUID="b67ac34b-b0ce-47d8-a4c0-a9281710e4de" pod="tigera-operator/tigera-operator-6f6897fdc5-7bmgv" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:11.282237 kubelet[2701]: I0515 12:54:11.281721 2701 kubelet.go:2306] "Pod admission denied" podUID="72c83332-f808-4b61-b72b-b533a433be42" pod="tigera-operator/tigera-operator-6f6897fdc5-sgpqn" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:11.377742 kubelet[2701]: I0515 12:54:11.377683 2701 kubelet.go:2306] "Pod admission denied" podUID="87627fa2-ec58-4d78-8867-2e4ee433597a" pod="tigera-operator/tigera-operator-6f6897fdc5-r79kd" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:11.407673 kubelet[2701]: E0515 12:54:11.407602 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:54:11.408682 containerd[1536]: time="2025-05-15T12:54:11.408642707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hfrjd,Uid:be5ba310-2fcd-4763-b9cf-8dba85ce0f76,Namespace:kube-system,Attempt:0,}" May 15 12:54:11.409881 containerd[1536]: time="2025-05-15T12:54:11.408912747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cff7bc5b6-sff94,Uid:9345d3c6-0d7e-4691-8b9a-ecd0176d0441,Namespace:calico-system,Attempt:0,}" May 15 12:54:11.523083 kubelet[2701]: I0515 12:54:11.522558 2701 kubelet.go:2306] "Pod admission denied" podUID="d6ec3a69-2d67-46e2-83f7-e27139380220" pod="tigera-operator/tigera-operator-6f6897fdc5-pwb9h" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 12:54:11.537695 containerd[1536]: time="2025-05-15T12:54:11.537266076Z" level=error msg="Failed to destroy network for sandbox \"7c03d6cc0bb51b1d9d917ff7201a51d6a04c0eaad33d883a4c50649705b2cc57\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:54:11.540545 containerd[1536]: time="2025-05-15T12:54:11.540507573Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cff7bc5b6-sff94,Uid:9345d3c6-0d7e-4691-8b9a-ecd0176d0441,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c03d6cc0bb51b1d9d917ff7201a51d6a04c0eaad33d883a4c50649705b2cc57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:54:11.545418 kubelet[2701]: E0515 12:54:11.543407 2701 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c03d6cc0bb51b1d9d917ff7201a51d6a04c0eaad33d883a4c50649705b2cc57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:54:11.545418 kubelet[2701]: E0515 12:54:11.543478 2701 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c03d6cc0bb51b1d9d917ff7201a51d6a04c0eaad33d883a4c50649705b2cc57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6cff7bc5b6-sff94" May 15 12:54:11.545418 kubelet[2701]: E0515 12:54:11.543499 2701 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c03d6cc0bb51b1d9d917ff7201a51d6a04c0eaad33d883a4c50649705b2cc57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6cff7bc5b6-sff94" May 15 12:54:11.545418 kubelet[2701]: E0515 12:54:11.543550 2701 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6cff7bc5b6-sff94_calico-system(9345d3c6-0d7e-4691-8b9a-ecd0176d0441)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6cff7bc5b6-sff94_calico-system(9345d3c6-0d7e-4691-8b9a-ecd0176d0441)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7c03d6cc0bb51b1d9d917ff7201a51d6a04c0eaad33d883a4c50649705b2cc57\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6cff7bc5b6-sff94" podUID="9345d3c6-0d7e-4691-8b9a-ecd0176d0441" May 15 12:54:11.545284 systemd[1]: run-netns-cni\x2d911cfab5\x2d5101\x2d42e2\x2de646\x2de5ed83bbfe68.mount: Deactivated successfully. 
May 15 12:54:11.575756 containerd[1536]: time="2025-05-15T12:54:11.575614278Z" level=error msg="Failed to destroy network for sandbox \"5f92b6f6024fbbcdc23ac5a6654fddb5fc636f46d2528dd49e9c81a1b991b880\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:54:11.577387 containerd[1536]: time="2025-05-15T12:54:11.577233341Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hfrjd,Uid:be5ba310-2fcd-4763-b9cf-8dba85ce0f76,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f92b6f6024fbbcdc23ac5a6654fddb5fc636f46d2528dd49e9c81a1b991b880\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:54:11.578186 kubelet[2701]: E0515 12:54:11.577789 2701 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f92b6f6024fbbcdc23ac5a6654fddb5fc636f46d2528dd49e9c81a1b991b880\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:54:11.578186 kubelet[2701]: E0515 12:54:11.577838 2701 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f92b6f6024fbbcdc23ac5a6654fddb5fc636f46d2528dd49e9c81a1b991b880\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-hfrjd" May 15 12:54:11.578186 kubelet[2701]: E0515 12:54:11.577858 2701 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f92b6f6024fbbcdc23ac5a6654fddb5fc636f46d2528dd49e9c81a1b991b880\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-hfrjd" May 15 12:54:11.578186 kubelet[2701]: E0515 12:54:11.577892 2701 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-hfrjd_kube-system(be5ba310-2fcd-4763-b9cf-8dba85ce0f76)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-hfrjd_kube-system(be5ba310-2fcd-4763-b9cf-8dba85ce0f76)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5f92b6f6024fbbcdc23ac5a6654fddb5fc636f46d2528dd49e9c81a1b991b880\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-hfrjd" podUID="be5ba310-2fcd-4763-b9cf-8dba85ce0f76" May 15 12:54:11.581054 systemd[1]: run-netns-cni\x2d2daac374\x2d7496\x2d2654\x2d5874\x2d137638697118.mount: Deactivated successfully. 
May 15 12:54:11.591292 kubelet[2701]: I0515 12:54:11.591250 2701 kubelet.go:2306] "Pod admission denied" podUID="5fcaf039-b000-43a8-9f56-eb8d1703199e" pod="tigera-operator/tigera-operator-6f6897fdc5-69rsx" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:11.677787 kubelet[2701]: I0515 12:54:11.677725 2701 kubelet.go:2306] "Pod admission denied" podUID="48b9c871-d0a3-4c46-949b-aee4e00df158" pod="tigera-operator/tigera-operator-6f6897fdc5-f4rts" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:11.888481 kubelet[2701]: I0515 12:54:11.887652 2701 kubelet.go:2306] "Pod admission denied" podUID="49cd1e23-9577-40bc-867b-3d246682a49e" pod="tigera-operator/tigera-operator-6f6897fdc5-5rczr" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:11.976877 kubelet[2701]: I0515 12:54:11.976829 2701 kubelet.go:2306] "Pod admission denied" podUID="99a16336-ad79-4687-90ae-74a0a772011d" pod="tigera-operator/tigera-operator-6f6897fdc5-5dkjb" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:12.030525 kubelet[2701]: I0515 12:54:12.030449 2701 kubelet.go:2306] "Pod admission denied" podUID="ce3eb5df-7cfa-4df0-8d4f-5a60c380c15c" pod="tigera-operator/tigera-operator-6f6897fdc5-5lw8h" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:12.135233 kubelet[2701]: I0515 12:54:12.135204 2701 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 12:54:12.136505 kubelet[2701]: I0515 12:54:12.135501 2701 container_gc.go:88] "Attempting to delete unused containers" May 15 12:54:12.142485 kubelet[2701]: I0515 12:54:12.142372 2701 image_gc_manager.go:431] "Attempting to delete unused images" May 15 12:54:12.155793 kubelet[2701]: I0515 12:54:12.154217 2701 kubelet.go:2306] "Pod admission denied" podUID="c36e9a36-54d1-496b-a740-cf39332d4695" pod="tigera-operator/tigera-operator-6f6897fdc5-p45ws" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 12:54:12.169506 kubelet[2701]: I0515 12:54:12.169352 2701 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 12:54:12.169506 kubelet[2701]: I0515 12:54:12.169441 2701 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-6f6b679f8f-hfrjd","kube-system/coredns-6f6b679f8f-svgwx","calico-system/calico-kube-controllers-6cff7bc5b6-sff94","calico-system/csi-node-driver-d7hrh","calico-system/calico-node-qxkgl","calico-system/calico-typha-59b79bbb46-9qqgw","kube-system/kube-controller-manager-172-232-9-197","kube-system/kube-proxy-rhz8r","kube-system/kube-apiserver-172-232-9-197","kube-system/kube-scheduler-172-232-9-197"] May 15 12:54:12.170191 kubelet[2701]: E0515 12:54:12.169715 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-hfrjd" May 15 12:54:12.170191 kubelet[2701]: E0515 12:54:12.169866 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-svgwx" May 15 12:54:12.170191 kubelet[2701]: E0515 12:54:12.169877 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-6cff7bc5b6-sff94" May 15 12:54:12.170191 kubelet[2701]: E0515 12:54:12.169884 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-d7hrh" May 15 12:54:12.170191 kubelet[2701]: E0515 12:54:12.169890 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-qxkgl" May 15 12:54:12.170191 kubelet[2701]: E0515 12:54:12.169902 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-59b79bbb46-9qqgw" May 15 12:54:12.170191 kubelet[2701]: E0515 12:54:12.169912 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-232-9-197" May 15 12:54:12.170191 kubelet[2701]: E0515 12:54:12.169922 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-rhz8r" May 15 12:54:12.170191 kubelet[2701]: E0515 12:54:12.169931 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-232-9-197" May 15 12:54:12.170191 kubelet[2701]: E0515 12:54:12.169940 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-232-9-197" May 15 12:54:12.170191 kubelet[2701]: I0515 12:54:12.169950 2701 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 12:54:12.328570 kubelet[2701]: I0515 12:54:12.328515 2701 kubelet.go:2306] "Pod admission denied" podUID="49ca0dda-3bd5-4775-9ed6-3da949ad2836" pod="tigera-operator/tigera-operator-6f6897fdc5-stq5x" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:12.431026 kubelet[2701]: I0515 12:54:12.430904 2701 kubelet.go:2306] "Pod admission denied" podUID="efe339ac-a454-4446-a4ec-62cdf4c60234" pod="tigera-operator/tigera-operator-6f6897fdc5-lbgcq" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:12.534394 kubelet[2701]: I0515 12:54:12.534333 2701 kubelet.go:2306] "Pod admission denied" podUID="dc247f98-e392-4250-b847-d23c33a97678" pod="tigera-operator/tigera-operator-6f6897fdc5-kk5ht" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 12:54:12.628536 kubelet[2701]: I0515 12:54:12.628475 2701 kubelet.go:2306] "Pod admission denied" podUID="1c5c00de-1766-4447-a10b-2009688ceb05" pod="tigera-operator/tigera-operator-6f6897fdc5-5dnzw" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:12.731441 kubelet[2701]: I0515 12:54:12.730530 2701 kubelet.go:2306] "Pod admission denied" podUID="7ed8a1d5-71db-443a-83ac-d2a302126e98" pod="tigera-operator/tigera-operator-6f6897fdc5-n5dbk" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:12.837743 kubelet[2701]: I0515 12:54:12.837696 2701 kubelet.go:2306] "Pod admission denied" podUID="cfb5a030-203e-41d8-8a88-92e063c7668e" pod="tigera-operator/tigera-operator-6f6897fdc5-p6l7k" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:12.884567 kubelet[2701]: I0515 12:54:12.884514 2701 kubelet.go:2306] "Pod admission denied" podUID="c92492ab-cf7c-4de0-8a04-a4526f839491" pod="tigera-operator/tigera-operator-6f6897fdc5-c5xl7" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:12.979445 kubelet[2701]: I0515 12:54:12.979383 2701 kubelet.go:2306] "Pod admission denied" podUID="7a38d927-e895-4ba9-bfbf-bdada8115693" pod="tigera-operator/tigera-operator-6f6897fdc5-x7dbq" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:13.183574 kubelet[2701]: I0515 12:54:13.182837 2701 kubelet.go:2306] "Pod admission denied" podUID="fbb0b36c-602a-4dde-907d-0f64d1bc7fa4" pod="tigera-operator/tigera-operator-6f6897fdc5-q8krx" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:13.279569 kubelet[2701]: I0515 12:54:13.279502 2701 kubelet.go:2306] "Pod admission denied" podUID="96376e9a-d6b2-4c33-8438-f5bba7992dfe" pod="tigera-operator/tigera-operator-6f6897fdc5-jsf64" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:13.327105 kubelet[2701]: I0515 12:54:13.327041 2701 kubelet.go:2306] "Pod admission denied" podUID="297038bf-099c-4885-921d-f21233830f0a" pod="tigera-operator/tigera-operator-6f6897fdc5-h6765" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:13.435768 kubelet[2701]: I0515 12:54:13.434685 2701 kubelet.go:2306] "Pod admission denied" podUID="c1efc2e3-199f-40ea-a63a-fbc7375daab1" pod="tigera-operator/tigera-operator-6f6897fdc5-9tm8h" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:13.531021 kubelet[2701]: I0515 12:54:13.530962 2701 kubelet.go:2306] "Pod admission denied" podUID="c7a49db8-2753-4722-94f8-c1b96b5a1121" pod="tigera-operator/tigera-operator-6f6897fdc5-7p8c4" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:13.579536 kubelet[2701]: I0515 12:54:13.579487 2701 kubelet.go:2306] "Pod admission denied" podUID="904a5184-5f47-4de8-a813-04c84ccc760f" pod="tigera-operator/tigera-operator-6f6897fdc5-rlm8s" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:13.685476 kubelet[2701]: I0515 12:54:13.685400 2701 kubelet.go:2306] "Pod admission denied" podUID="64923df2-5ec4-478b-9625-84c275179c38" pod="tigera-operator/tigera-operator-6f6897fdc5-rq2sc" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 12:54:13.880156 kubelet[2701]: I0515 12:54:13.880093 2701 kubelet.go:2306] "Pod admission denied" podUID="7f2e3b46-a3a4-43d1-b01e-09916d41e74e" pod="tigera-operator/tigera-operator-6f6897fdc5-54vqk" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:13.981298 kubelet[2701]: I0515 12:54:13.981254 2701 kubelet.go:2306] "Pod admission denied" podUID="d62784fb-6d33-423d-b32c-81b007e423be" pod="tigera-operator/tigera-operator-6f6897fdc5-bv8hl" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:14.085412 kubelet[2701]: I0515 12:54:14.085170 2701 kubelet.go:2306] "Pod admission denied" podUID="ed02008a-0625-4434-a230-8c7e82de1895" pod="tigera-operator/tigera-operator-6f6897fdc5-tsfbv" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:14.183417 kubelet[2701]: I0515 12:54:14.182950 2701 kubelet.go:2306] "Pod admission denied" podUID="9483929e-a5ea-43e6-95ee-cb486211034c" pod="tigera-operator/tigera-operator-6f6897fdc5-k9tjf" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:14.230087 kubelet[2701]: I0515 12:54:14.230043 2701 kubelet.go:2306] "Pod admission denied" podUID="d9f359d8-c780-4f67-8249-67262ad16190" pod="tigera-operator/tigera-operator-6f6897fdc5-dccmq" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:14.332480 kubelet[2701]: I0515 12:54:14.332362 2701 kubelet.go:2306] "Pod admission denied" podUID="66e7d743-4b3c-4e9a-a53c-342ee999027e" pod="tigera-operator/tigera-operator-6f6897fdc5-d8fwl" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:14.532370 kubelet[2701]: I0515 12:54:14.532250 2701 kubelet.go:2306] "Pod admission denied" podUID="0f3d241d-879c-4c3a-8995-93c907e3c1cb" pod="tigera-operator/tigera-operator-6f6897fdc5-47s2s" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:14.628283 kubelet[2701]: I0515 12:54:14.628230 2701 kubelet.go:2306] "Pod admission denied" podUID="23146025-61be-4977-bd9e-d55dde60f004" pod="tigera-operator/tigera-operator-6f6897fdc5-4jd7d" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:14.736474 kubelet[2701]: I0515 12:54:14.735544 2701 kubelet.go:2306] "Pod admission denied" podUID="419b2f82-e5cb-4845-abb4-eb7e000da5b2" pod="tigera-operator/tigera-operator-6f6897fdc5-gg9lx" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:14.828894 kubelet[2701]: I0515 12:54:14.828856 2701 kubelet.go:2306] "Pod admission denied" podUID="aa989bc2-fdae-4fc6-bccf-0864ad80807e" pod="tigera-operator/tigera-operator-6f6897fdc5-pkksl" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:14.928511 kubelet[2701]: I0515 12:54:14.928447 2701 kubelet.go:2306] "Pod admission denied" podUID="7c6e9fcd-39ed-4987-b7fe-662f0e670475" pod="tigera-operator/tigera-operator-6f6897fdc5-h8h8p" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:15.070483 kubelet[2701]: I0515 12:54:15.068890 2701 kubelet.go:2306] "Pod admission denied" podUID="a65e9b60-37ec-44c5-83f6-78f54c02bfa5" pod="tigera-operator/tigera-operator-6f6897fdc5-7sdb8" reason="Evicted" message="The node had condition: [DiskPressure]. 
" May 15 12:54:15.186653 kubelet[2701]: I0515 12:54:15.186549 2701 kubelet.go:2306] "Pod admission denied" podUID="c4218b0f-cb11-43d3-be0f-12a2a73fbb37" pod="tigera-operator/tigera-operator-6f6897fdc5-6rbww" reason="Evicted" message="The node had condition: [DiskPressure]. " May 15 12:54:15.468131 systemd[1]: Started sshd@10-172.232.9.197:22-139.178.89.65:47010.service - OpenSSH per-connection server daemon (139.178.89.65:47010). May 15 12:54:15.807666 sshd[4495]: Accepted publickey for core from 139.178.89.65 port 47010 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE May 15 12:54:15.808957 sshd-session[4495]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:54:15.814125 systemd-logind[1520]: New session 8 of user core. May 15 12:54:15.825580 systemd[1]: Started session-8.scope - Session 8 of User core. May 15 12:54:16.247733 sshd[4497]: Connection closed by 139.178.89.65 port 47010 May 15 12:54:16.248644 sshd-session[4495]: pam_unix(sshd:session): session closed for user core May 15 12:54:16.252937 systemd-logind[1520]: Session 8 logged out. Waiting for processes to exit. May 15 12:54:16.253445 systemd[1]: sshd@10-172.232.9.197:22-139.178.89.65:47010.service: Deactivated successfully. May 15 12:54:16.256507 systemd[1]: session-8.scope: Deactivated successfully. May 15 12:54:16.259269 systemd-logind[1520]: Removed session 8. May 15 12:54:18.424486 containerd[1536]: time="2025-05-15T12:54:18.424437718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d7hrh,Uid:30c809cf-5d96-45fe-9af3-2a80162d2f28,Namespace:calico-system,Attempt:0,}" May 15 12:54:18.481826 containerd[1536]: time="2025-05-15T12:54:18.481783734Z" level=error msg="Failed to destroy network for sandbox \"43a59ed31e9b681bf3c7af176b3504efde92cec5e2f9fb81e41e5a0d90ffa4b3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:54:18.486420 systemd[1]: run-netns-cni\x2d9e03dda7\x2daf37\x2d886d\x2dcc28\x2db82ea7fa4e8a.mount: Deactivated successfully. 
May 15 12:54:18.487710 kubelet[2701]: E0515 12:54:18.486832 2701 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43a59ed31e9b681bf3c7af176b3504efde92cec5e2f9fb81e41e5a0d90ffa4b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:54:18.487710 kubelet[2701]: E0515 12:54:18.486930 2701 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43a59ed31e9b681bf3c7af176b3504efde92cec5e2f9fb81e41e5a0d90ffa4b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d7hrh" May 15 12:54:18.487710 kubelet[2701]: E0515 12:54:18.487547 2701 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43a59ed31e9b681bf3c7af176b3504efde92cec5e2f9fb81e41e5a0d90ffa4b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d7hrh" May 15 12:54:18.488188 containerd[1536]: time="2025-05-15T12:54:18.486568422Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d7hrh,Uid:30c809cf-5d96-45fe-9af3-2a80162d2f28,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"43a59ed31e9b681bf3c7af176b3504efde92cec5e2f9fb81e41e5a0d90ffa4b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:54:18.489758 kubelet[2701]: E0515 12:54:18.487591 2701 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-d7hrh_calico-system(30c809cf-5d96-45fe-9af3-2a80162d2f28)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-d7hrh_calico-system(30c809cf-5d96-45fe-9af3-2a80162d2f28)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"43a59ed31e9b681bf3c7af176b3504efde92cec5e2f9fb81e41e5a0d90ffa4b3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-d7hrh" podUID="30c809cf-5d96-45fe-9af3-2a80162d2f28" May 15 12:54:21.316523 systemd[1]: Started sshd@11-172.232.9.197:22-139.178.89.65:56554.service - OpenSSH per-connection server daemon (139.178.89.65:56554). May 15 12:54:21.666961 sshd[4542]: Accepted publickey for core from 139.178.89.65 port 56554 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE May 15 12:54:21.668949 sshd-session[4542]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:54:21.673926 systemd-logind[1520]: New session 9 of user core. May 15 12:54:21.680597 systemd[1]: Started session-9.scope - Session 9 of User core. 
May 15 12:54:21.980062 sshd[4544]: Connection closed by 139.178.89.65 port 56554 May 15 12:54:21.981294 sshd-session[4542]: pam_unix(sshd:session): session closed for user core May 15 12:54:21.985579 systemd-logind[1520]: Session 9 logged out. Waiting for processes to exit. May 15 12:54:21.986545 systemd[1]: sshd@11-172.232.9.197:22-139.178.89.65:56554.service: Deactivated successfully. May 15 12:54:21.989061 systemd[1]: session-9.scope: Deactivated successfully. May 15 12:54:21.991293 systemd-logind[1520]: Removed session 9. May 15 12:54:22.185997 kubelet[2701]: I0515 12:54:22.185946 2701 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 12:54:22.185997 kubelet[2701]: I0515 12:54:22.185988 2701 container_gc.go:88] "Attempting to delete unused containers" May 15 12:54:22.188074 kubelet[2701]: I0515 12:54:22.188046 2701 image_gc_manager.go:431] "Attempting to delete unused images" May 15 12:54:22.189842 kubelet[2701]: I0515 12:54:22.189642 2701 image_gc_manager.go:487] "Removing image to free bytes" imageID="sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" size=18182961 runtimeHandler="" May 15 12:54:22.191018 containerd[1536]: time="2025-05-15T12:54:22.190276658Z" level=info msg="RemoveImage \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 15 12:54:22.191509 containerd[1536]: time="2025-05-15T12:54:22.191486341Z" level=info msg="ImageDelete event name:\"registry.k8s.io/coredns/coredns:v1.11.1\"" May 15 12:54:22.192325 containerd[1536]: time="2025-05-15T12:54:22.192245261Z" level=info msg="ImageDelete event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\"" May 15 12:54:22.193250 containerd[1536]: time="2025-05-15T12:54:22.192909492Z" level=info msg="RemoveImage \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" returns successfully" May 15 12:54:22.193332 containerd[1536]: time="2025-05-15T12:54:22.192949762Z" level=info msg="ImageDelete event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 15 12:54:22.193546 kubelet[2701]: I0515 12:54:22.193509 2701 image_gc_manager.go:487] "Removing image to free bytes" imageID="sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" size=56909194 runtimeHandler="" May 15 12:54:22.193708 containerd[1536]: time="2025-05-15T12:54:22.193672494Z" level=info msg="RemoveImage \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" May 15 12:54:22.194610 containerd[1536]: time="2025-05-15T12:54:22.194585455Z" level=info msg="ImageDelete event name:\"registry.k8s.io/etcd:3.5.15-0\"" May 15 12:54:22.195278 containerd[1536]: time="2025-05-15T12:54:22.195232467Z" level=info msg="ImageDelete event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\"" May 15 12:54:22.195561 containerd[1536]: time="2025-05-15T12:54:22.195539837Z" level=info msg="RemoveImage \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" returns successfully" May 15 12:54:22.195668 containerd[1536]: time="2025-05-15T12:54:22.195621197Z" level=info msg="ImageDelete event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" May 15 12:54:22.204404 kubelet[2701]: I0515 12:54:22.204385 2701 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 12:54:22.204519 
kubelet[2701]: I0515 12:54:22.204498 2701 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-6cff7bc5b6-sff94","kube-system/coredns-6f6b679f8f-hfrjd","kube-system/coredns-6f6b679f8f-svgwx","calico-system/csi-node-driver-d7hrh","calico-system/calico-node-qxkgl","calico-system/calico-typha-59b79bbb46-9qqgw","kube-system/kube-controller-manager-172-232-9-197","kube-system/kube-proxy-rhz8r","kube-system/kube-apiserver-172-232-9-197","kube-system/kube-scheduler-172-232-9-197"] May 15 12:54:22.204601 kubelet[2701]: E0515 12:54:22.204527 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-6cff7bc5b6-sff94" May 15 12:54:22.204601 kubelet[2701]: E0515 12:54:22.204537 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-hfrjd" May 15 12:54:22.204601 kubelet[2701]: E0515 12:54:22.204544 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-svgwx" May 15 12:54:22.204601 kubelet[2701]: E0515 12:54:22.204549 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-d7hrh" May 15 12:54:22.204601 kubelet[2701]: E0515 12:54:22.204555 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-qxkgl" May 15 12:54:22.204601 kubelet[2701]: E0515 12:54:22.204565 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-59b79bbb46-9qqgw" May 15 12:54:22.204601 kubelet[2701]: E0515 12:54:22.204572 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-232-9-197" May 15 12:54:22.204601 kubelet[2701]: E0515 12:54:22.204580 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-rhz8r" May 15 12:54:22.204601 kubelet[2701]: E0515 12:54:22.204588 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-232-9-197" May 15 12:54:22.204601 kubelet[2701]: E0515 12:54:22.204596 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-232-9-197" May 15 12:54:22.204601 kubelet[2701]: I0515 12:54:22.204606 2701 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 12:54:22.408438 kubelet[2701]: E0515 12:54:22.408396 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:54:22.418930 kubelet[2701]: E0515 12:54:22.418739 2701 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.29.3\\\"\"" pod="calico-system/calico-node-qxkgl" podUID="f69f6551-7509-4bf1-a40a-1170e25b66f0" May 15 12:54:23.408488 kubelet[2701]: E0515 12:54:23.408340 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:54:23.410156 kubelet[2701]: E0515 12:54:23.408574 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:54:23.410238 containerd[1536]: time="2025-05-15T12:54:23.408892325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cff7bc5b6-sff94,Uid:9345d3c6-0d7e-4691-8b9a-ecd0176d0441,Namespace:calico-system,Attempt:0,}" May 15 12:54:23.410238 containerd[1536]: time="2025-05-15T12:54:23.409408056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-svgwx,Uid:a44ca807-6ed0-447f-9c4f-1a10e61b025b,Namespace:kube-system,Attempt:0,}" May 15 12:54:23.410238 containerd[1536]: time="2025-05-15T12:54:23.409581856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hfrjd,Uid:be5ba310-2fcd-4763-b9cf-8dba85ce0f76,Namespace:kube-system,Attempt:0,}" May 15 12:54:23.514198 containerd[1536]: time="2025-05-15T12:54:23.513947330Z" level=error msg="Failed to destroy network for sandbox \"3c180b10d2129e1b62fc054870216ff3455f274d6049a0909d0b45a392c5fbe3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:54:23.518098 systemd[1]: run-netns-cni\x2def97ad58\x2d3f68\x2d9214\x2dab03\x2ddff7ef0a9a81.mount: Deactivated successfully. May 15 12:54:23.520709 containerd[1536]: time="2025-05-15T12:54:23.520585150Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hfrjd,Uid:be5ba310-2fcd-4763-b9cf-8dba85ce0f76,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c180b10d2129e1b62fc054870216ff3455f274d6049a0909d0b45a392c5fbe3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:54:23.521669 kubelet[2701]: E0515 12:54:23.521520 2701 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c180b10d2129e1b62fc054870216ff3455f274d6049a0909d0b45a392c5fbe3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:54:23.521669 kubelet[2701]: E0515 12:54:23.521600 2701 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c180b10d2129e1b62fc054870216ff3455f274d6049a0909d0b45a392c5fbe3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-hfrjd" May 15 12:54:23.522117 kubelet[2701]: E0515 12:54:23.521864 2701 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c180b10d2129e1b62fc054870216ff3455f274d6049a0909d0b45a392c5fbe3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-hfrjd" May 15 12:54:23.522827 kubelet[2701]: E0515 12:54:23.522361 2701 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-6f6b679f8f-hfrjd_kube-system(be5ba310-2fcd-4763-b9cf-8dba85ce0f76)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-hfrjd_kube-system(be5ba310-2fcd-4763-b9cf-8dba85ce0f76)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3c180b10d2129e1b62fc054870216ff3455f274d6049a0909d0b45a392c5fbe3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-hfrjd" podUID="be5ba310-2fcd-4763-b9cf-8dba85ce0f76" May 15 12:54:23.535371 containerd[1536]: time="2025-05-15T12:54:23.533244929Z" level=error msg="Failed to destroy network for sandbox \"272f32b0e901da8bf39a9eea0ceb4c36567ab106feada068d3479e09860cbb21\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:54:23.535312 systemd[1]: run-netns-cni\x2d1a39faa6\x2dbc87\x2d6b2b\x2d3f82\x2dba190bbcd966.mount: Deactivated successfully. May 15 12:54:23.536566 containerd[1536]: time="2025-05-15T12:54:23.536537705Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cff7bc5b6-sff94,Uid:9345d3c6-0d7e-4691-8b9a-ecd0176d0441,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"272f32b0e901da8bf39a9eea0ceb4c36567ab106feada068d3479e09860cbb21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:54:23.536971 containerd[1536]: time="2025-05-15T12:54:23.536895945Z" level=error msg="Failed to destroy network for sandbox \"15addfde1934ee845ec83937e1b5c64ed1a2ed96858f9cd622b457d25a59eb2f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:54:23.538745 kubelet[2701]: E0515 12:54:23.537314 2701 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"272f32b0e901da8bf39a9eea0ceb4c36567ab106feada068d3479e09860cbb21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:54:23.538745 kubelet[2701]: E0515 12:54:23.537394 2701 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"272f32b0e901da8bf39a9eea0ceb4c36567ab106feada068d3479e09860cbb21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6cff7bc5b6-sff94" May 15 12:54:23.538745 kubelet[2701]: E0515 12:54:23.537420 2701 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"272f32b0e901da8bf39a9eea0ceb4c36567ab106feada068d3479e09860cbb21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-6cff7bc5b6-sff94" May 15 12:54:23.538745 kubelet[2701]: E0515 12:54:23.537943 2701 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6cff7bc5b6-sff94_calico-system(9345d3c6-0d7e-4691-8b9a-ecd0176d0441)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6cff7bc5b6-sff94_calico-system(9345d3c6-0d7e-4691-8b9a-ecd0176d0441)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"272f32b0e901da8bf39a9eea0ceb4c36567ab106feada068d3479e09860cbb21\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6cff7bc5b6-sff94" podUID="9345d3c6-0d7e-4691-8b9a-ecd0176d0441" May 15 12:54:23.539608 containerd[1536]: time="2025-05-15T12:54:23.539583949Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-svgwx,Uid:a44ca807-6ed0-447f-9c4f-1a10e61b025b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"15addfde1934ee845ec83937e1b5c64ed1a2ed96858f9cd622b457d25a59eb2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:54:23.540304 kubelet[2701]: E0515 12:54:23.540217 2701 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15addfde1934ee845ec83937e1b5c64ed1a2ed96858f9cd622b457d25a59eb2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:54:23.540359 kubelet[2701]: E0515 12:54:23.540324 2701 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15addfde1934ee845ec83937e1b5c64ed1a2ed96858f9cd622b457d25a59eb2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-svgwx" May 15 12:54:23.540359 kubelet[2701]: E0515 12:54:23.540354 2701 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15addfde1934ee845ec83937e1b5c64ed1a2ed96858f9cd622b457d25a59eb2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-svgwx" May 15 12:54:23.540519 kubelet[2701]: E0515 12:54:23.540398 2701 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-svgwx_kube-system(a44ca807-6ed0-447f-9c4f-1a10e61b025b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-svgwx_kube-system(a44ca807-6ed0-447f-9c4f-1a10e61b025b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"15addfde1934ee845ec83937e1b5c64ed1a2ed96858f9cd622b457d25a59eb2f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-svgwx" podUID="a44ca807-6ed0-447f-9c4f-1a10e61b025b" May 15 12:54:24.413445 systemd[1]: run-netns-cni\x2d1bd15784\x2de259\x2d64d0\x2d7c80\x2dcd7f98e2ef58.mount: Deactivated successfully. May 15 12:54:27.051280 systemd[1]: Started sshd@12-172.232.9.197:22-139.178.89.65:54750.service - OpenSSH per-connection server daemon (139.178.89.65:54750). May 15 12:54:27.394729 sshd[4654]: Accepted publickey for core from 139.178.89.65 port 54750 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE May 15 12:54:27.396234 sshd-session[4654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:54:27.402661 systemd-logind[1520]: New session 10 of user core. May 15 12:54:27.408657 systemd[1]: Started session-10.scope - Session 10 of User core. May 15 12:54:27.706075 sshd[4656]: Connection closed by 139.178.89.65 port 54750 May 15 12:54:27.706647 sshd-session[4654]: pam_unix(sshd:session): session closed for user core May 15 12:54:27.711880 systemd-logind[1520]: Session 10 logged out. Waiting for processes to exit. May 15 12:54:27.712620 systemd[1]: sshd@12-172.232.9.197:22-139.178.89.65:54750.service: Deactivated successfully. May 15 12:54:27.715300 systemd[1]: session-10.scope: Deactivated successfully. May 15 12:54:27.717396 systemd-logind[1520]: Removed session 10. May 15 12:54:32.408656 containerd[1536]: time="2025-05-15T12:54:32.408589451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d7hrh,Uid:30c809cf-5d96-45fe-9af3-2a80162d2f28,Namespace:calico-system,Attempt:0,}" May 15 12:54:32.459045 containerd[1536]: time="2025-05-15T12:54:32.458977381Z" level=error msg="Failed to destroy network for sandbox \"179c8c028e7196630281d7240dadc0b0b81ca79a7296dcc053d52e638e798701\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:54:32.460911 systemd[1]: run-netns-cni\x2df87d3186\x2d02bb\x2d2987\x2d0efa\x2d90a7bbc28921.mount: Deactivated successfully. 
May 15 12:54:32.462701 containerd[1536]: time="2025-05-15T12:54:32.462619556Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d7hrh,Uid:30c809cf-5d96-45fe-9af3-2a80162d2f28,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"179c8c028e7196630281d7240dadc0b0b81ca79a7296dcc053d52e638e798701\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:54:32.475153 kubelet[2701]: E0515 12:54:32.475081 2701 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"179c8c028e7196630281d7240dadc0b0b81ca79a7296dcc053d52e638e798701\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:54:32.475490 kubelet[2701]: E0515 12:54:32.475405 2701 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"179c8c028e7196630281d7240dadc0b0b81ca79a7296dcc053d52e638e798701\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d7hrh" May 15 12:54:32.475673 kubelet[2701]: E0515 12:54:32.475611 2701 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"179c8c028e7196630281d7240dadc0b0b81ca79a7296dcc053d52e638e798701\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d7hrh" May 15 12:54:32.475816 kubelet[2701]: E0515 12:54:32.475760 2701 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-d7hrh_calico-system(30c809cf-5d96-45fe-9af3-2a80162d2f28)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-d7hrh_calico-system(30c809cf-5d96-45fe-9af3-2a80162d2f28)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"179c8c028e7196630281d7240dadc0b0b81ca79a7296dcc053d52e638e798701\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-d7hrh" podUID="30c809cf-5d96-45fe-9af3-2a80162d2f28" May 15 12:54:32.766608 systemd[1]: Started sshd@13-172.232.9.197:22-139.178.89.65:54752.service - OpenSSH per-connection server daemon (139.178.89.65:54752). May 15 12:54:33.111262 sshd[4699]: Accepted publickey for core from 139.178.89.65 port 54752 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE May 15 12:54:33.112765 sshd-session[4699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:54:33.119556 systemd-logind[1520]: New session 11 of user core. May 15 12:54:33.123596 systemd[1]: Started session-11.scope - Session 11 of User core. 
May 15 12:54:33.407959 kubelet[2701]: E0515 12:54:33.407693 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:54:33.408878 kubelet[2701]: E0515 12:54:33.408828 2701 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.29.3\\\"\"" pod="calico-system/calico-node-qxkgl" podUID="f69f6551-7509-4bf1-a40a-1170e25b66f0" May 15 12:54:33.419054 sshd[4701]: Connection closed by 139.178.89.65 port 54752 May 15 12:54:33.419722 sshd-session[4699]: pam_unix(sshd:session): session closed for user core May 15 12:54:33.424780 systemd-logind[1520]: Session 11 logged out. Waiting for processes to exit. May 15 12:54:33.425731 systemd[1]: sshd@13-172.232.9.197:22-139.178.89.65:54752.service: Deactivated successfully. May 15 12:54:33.428364 systemd[1]: session-11.scope: Deactivated successfully. May 15 12:54:33.430983 systemd-logind[1520]: Removed session 11. May 15 12:54:33.484096 systemd[1]: Started sshd@14-172.232.9.197:22-139.178.89.65:54766.service - OpenSSH per-connection server daemon (139.178.89.65:54766). May 15 12:54:33.827474 sshd[4714]: Accepted publickey for core from 139.178.89.65 port 54766 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE May 15 12:54:33.828777 sshd-session[4714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:54:33.833640 systemd-logind[1520]: New session 12 of user core. May 15 12:54:33.847574 systemd[1]: Started session-12.scope - Session 12 of User core. May 15 12:54:34.168942 sshd[4716]: Connection closed by 139.178.89.65 port 54766 May 15 12:54:34.169706 sshd-session[4714]: pam_unix(sshd:session): session closed for user core May 15 12:54:34.175657 systemd-logind[1520]: Session 12 logged out. Waiting for processes to exit. May 15 12:54:34.176491 systemd[1]: sshd@14-172.232.9.197:22-139.178.89.65:54766.service: Deactivated successfully. May 15 12:54:34.179109 systemd[1]: session-12.scope: Deactivated successfully. May 15 12:54:34.182051 systemd-logind[1520]: Removed session 12. May 15 12:54:34.228529 systemd[1]: Started sshd@15-172.232.9.197:22-139.178.89.65:54776.service - OpenSSH per-connection server daemon (139.178.89.65:54776). May 15 12:54:34.569411 sshd[4726]: Accepted publickey for core from 139.178.89.65 port 54776 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE May 15 12:54:34.570945 sshd-session[4726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:54:34.577189 systemd-logind[1520]: New session 13 of user core. May 15 12:54:34.584580 systemd[1]: Started session-13.scope - Session 13 of User core. May 15 12:54:34.876855 sshd[4728]: Connection closed by 139.178.89.65 port 54776 May 15 12:54:34.877608 sshd-session[4726]: pam_unix(sshd:session): session closed for user core May 15 12:54:34.882259 systemd[1]: sshd@15-172.232.9.197:22-139.178.89.65:54776.service: Deactivated successfully. May 15 12:54:34.884278 systemd[1]: session-13.scope: Deactivated successfully. May 15 12:54:34.885094 systemd-logind[1520]: Session 13 logged out. Waiting for processes to exit. May 15 12:54:34.886905 systemd-logind[1520]: Removed session 13. 
May 15 12:54:36.408257 kubelet[2701]: E0515 12:54:36.406998 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:54:36.408823 containerd[1536]: time="2025-05-15T12:54:36.408674099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hfrjd,Uid:be5ba310-2fcd-4763-b9cf-8dba85ce0f76,Namespace:kube-system,Attempt:0,}" May 15 12:54:36.409396 kubelet[2701]: E0515 12:54:36.409375 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:54:36.409709 containerd[1536]: time="2025-05-15T12:54:36.409609630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-svgwx,Uid:a44ca807-6ed0-447f-9c4f-1a10e61b025b,Namespace:kube-system,Attempt:0,}" May 15 12:54:36.410064 kubelet[2701]: E0515 12:54:36.410044 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:54:36.481562 containerd[1536]: time="2025-05-15T12:54:36.481520726Z" level=error msg="Failed to destroy network for sandbox \"9f20991c2309709b3b1e4ab0392f61553656a7c7087766f1cb78b45e66e98eac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:54:36.482888 containerd[1536]: time="2025-05-15T12:54:36.482563017Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hfrjd,Uid:be5ba310-2fcd-4763-b9cf-8dba85ce0f76,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f20991c2309709b3b1e4ab0392f61553656a7c7087766f1cb78b45e66e98eac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:54:36.482965 kubelet[2701]: E0515 12:54:36.482776 2701 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f20991c2309709b3b1e4ab0392f61553656a7c7087766f1cb78b45e66e98eac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:54:36.482965 kubelet[2701]: E0515 12:54:36.482828 2701 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f20991c2309709b3b1e4ab0392f61553656a7c7087766f1cb78b45e66e98eac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-hfrjd" May 15 12:54:36.482965 kubelet[2701]: E0515 12:54:36.482860 2701 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f20991c2309709b3b1e4ab0392f61553656a7c7087766f1cb78b45e66e98eac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-hfrjd" May 15 12:54:36.484350 kubelet[2701]: E0515 12:54:36.483138 2701 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-hfrjd_kube-system(be5ba310-2fcd-4763-b9cf-8dba85ce0f76)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-hfrjd_kube-system(be5ba310-2fcd-4763-b9cf-8dba85ce0f76)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9f20991c2309709b3b1e4ab0392f61553656a7c7087766f1cb78b45e66e98eac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-hfrjd" podUID="be5ba310-2fcd-4763-b9cf-8dba85ce0f76" May 15 12:54:36.486340 systemd[1]: run-netns-cni\x2d38c3dc20\x2dd45c\x2df10c\x2d1d20\x2db3e8b8918b12.mount: Deactivated successfully. May 15 12:54:36.492597 containerd[1536]: time="2025-05-15T12:54:36.492568771Z" level=error msg="Failed to destroy network for sandbox \"540e0ab1690ca6438e1e239f5a38c9abd49c77c156b8e49bb389234d5229da7a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:54:36.496320 containerd[1536]: time="2025-05-15T12:54:36.493262622Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-svgwx,Uid:a44ca807-6ed0-447f-9c4f-1a10e61b025b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"540e0ab1690ca6438e1e239f5a38c9abd49c77c156b8e49bb389234d5229da7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:54:36.495215 systemd[1]: run-netns-cni\x2d8376c597\x2d94e3\x2dc067\x2d9f3b\x2d5aee92fb3ed5.mount: Deactivated successfully. 
May 15 12:54:36.496504 kubelet[2701]: E0515 12:54:36.493504 2701 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"540e0ab1690ca6438e1e239f5a38c9abd49c77c156b8e49bb389234d5229da7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:54:36.496504 kubelet[2701]: E0515 12:54:36.493597 2701 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"540e0ab1690ca6438e1e239f5a38c9abd49c77c156b8e49bb389234d5229da7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-svgwx" May 15 12:54:36.496504 kubelet[2701]: E0515 12:54:36.493614 2701 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"540e0ab1690ca6438e1e239f5a38c9abd49c77c156b8e49bb389234d5229da7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-svgwx" May 15 12:54:36.496504 kubelet[2701]: E0515 12:54:36.493683 2701 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-svgwx_kube-system(a44ca807-6ed0-447f-9c4f-1a10e61b025b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-svgwx_kube-system(a44ca807-6ed0-447f-9c4f-1a10e61b025b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"540e0ab1690ca6438e1e239f5a38c9abd49c77c156b8e49bb389234d5229da7a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-svgwx" podUID="a44ca807-6ed0-447f-9c4f-1a10e61b025b" May 15 12:54:37.408108 containerd[1536]: time="2025-05-15T12:54:37.407955277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cff7bc5b6-sff94,Uid:9345d3c6-0d7e-4691-8b9a-ecd0176d0441,Namespace:calico-system,Attempt:0,}" May 15 12:54:37.459091 containerd[1536]: time="2025-05-15T12:54:37.459041623Z" level=error msg="Failed to destroy network for sandbox \"9a0674722a38f340dd788470ec3c2d1220d8488f413be24587daf7b9257589f0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:54:37.460993 systemd[1]: run-netns-cni\x2d127b7c57\x2d6f8f\x2dcd86\x2d7f9c\x2db8ff3d90e640.mount: Deactivated successfully. 
May 15 12:54:37.462565 containerd[1536]: time="2025-05-15T12:54:37.462391318Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cff7bc5b6-sff94,Uid:9345d3c6-0d7e-4691-8b9a-ecd0176d0441,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a0674722a38f340dd788470ec3c2d1220d8488f413be24587daf7b9257589f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:54:37.462839 kubelet[2701]: E0515 12:54:37.462807 2701 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a0674722a38f340dd788470ec3c2d1220d8488f413be24587daf7b9257589f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:54:37.463665 kubelet[2701]: E0515 12:54:37.463135 2701 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a0674722a38f340dd788470ec3c2d1220d8488f413be24587daf7b9257589f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6cff7bc5b6-sff94" May 15 12:54:37.463665 kubelet[2701]: E0515 12:54:37.463162 2701 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a0674722a38f340dd788470ec3c2d1220d8488f413be24587daf7b9257589f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6cff7bc5b6-sff94" May 15 12:54:37.463665 kubelet[2701]: E0515 12:54:37.463402 2701 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6cff7bc5b6-sff94_calico-system(9345d3c6-0d7e-4691-8b9a-ecd0176d0441)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6cff7bc5b6-sff94_calico-system(9345d3c6-0d7e-4691-8b9a-ecd0176d0441)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9a0674722a38f340dd788470ec3c2d1220d8488f413be24587daf7b9257589f0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6cff7bc5b6-sff94" podUID="9345d3c6-0d7e-4691-8b9a-ecd0176d0441" May 15 12:54:39.941083 systemd[1]: Started sshd@16-172.232.9.197:22-139.178.89.65:55340.service - OpenSSH per-connection server daemon (139.178.89.65:55340). May 15 12:54:40.289036 sshd[4834]: Accepted publickey for core from 139.178.89.65 port 55340 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE May 15 12:54:40.290786 sshd-session[4834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:54:40.304120 systemd-logind[1520]: New session 14 of user core. May 15 12:54:40.312808 systemd[1]: Started session-14.scope - Session 14 of User core. 
May 15 12:54:40.601529 sshd[4836]: Connection closed by 139.178.89.65 port 55340 May 15 12:54:40.602915 sshd-session[4834]: pam_unix(sshd:session): session closed for user core May 15 12:54:40.606275 systemd[1]: sshd@16-172.232.9.197:22-139.178.89.65:55340.service: Deactivated successfully. May 15 12:54:40.611276 systemd[1]: session-14.scope: Deactivated successfully. May 15 12:54:40.615218 systemd-logind[1520]: Session 14 logged out. Waiting for processes to exit. May 15 12:54:40.618930 systemd-logind[1520]: Removed session 14. May 15 12:54:43.408548 containerd[1536]: time="2025-05-15T12:54:43.408501045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d7hrh,Uid:30c809cf-5d96-45fe-9af3-2a80162d2f28,Namespace:calico-system,Attempt:0,}" May 15 12:54:43.479032 containerd[1536]: time="2025-05-15T12:54:43.478979993Z" level=error msg="Failed to destroy network for sandbox \"59deb029536f7b8b2d1b038c9ffba82006e0b1e3250aa34f4f7c1789d1c4582a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:54:43.480888 systemd[1]: run-netns-cni\x2d6c2cf6de\x2d09d5\x2d4c7d\x2dc133\x2d55dd15578040.mount: Deactivated successfully. May 15 12:54:43.482965 containerd[1536]: time="2025-05-15T12:54:43.482909188Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d7hrh,Uid:30c809cf-5d96-45fe-9af3-2a80162d2f28,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"59deb029536f7b8b2d1b038c9ffba82006e0b1e3250aa34f4f7c1789d1c4582a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:54:43.483308 kubelet[2701]: E0515 12:54:43.483242 2701 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59deb029536f7b8b2d1b038c9ffba82006e0b1e3250aa34f4f7c1789d1c4582a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:54:43.483809 kubelet[2701]: E0515 12:54:43.483309 2701 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59deb029536f7b8b2d1b038c9ffba82006e0b1e3250aa34f4f7c1789d1c4582a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d7hrh" May 15 12:54:43.483809 kubelet[2701]: E0515 12:54:43.483330 2701 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59deb029536f7b8b2d1b038c9ffba82006e0b1e3250aa34f4f7c1789d1c4582a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d7hrh" May 15 12:54:43.483809 kubelet[2701]: E0515 12:54:43.483384 2701 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-d7hrh_calico-system(30c809cf-5d96-45fe-9af3-2a80162d2f28)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-d7hrh_calico-system(30c809cf-5d96-45fe-9af3-2a80162d2f28)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"59deb029536f7b8b2d1b038c9ffba82006e0b1e3250aa34f4f7c1789d1c4582a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-d7hrh" podUID="30c809cf-5d96-45fe-9af3-2a80162d2f28" May 15 12:54:45.666929 systemd[1]: Started sshd@17-172.232.9.197:22-139.178.89.65:55352.service - OpenSSH per-connection server daemon (139.178.89.65:55352). May 15 12:54:46.002141 sshd[4876]: Accepted publickey for core from 139.178.89.65 port 55352 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE May 15 12:54:46.003754 sshd-session[4876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:54:46.008687 systemd-logind[1520]: New session 15 of user core. May 15 12:54:46.015597 systemd[1]: Started session-15.scope - Session 15 of User core. May 15 12:54:46.320946 sshd[4878]: Connection closed by 139.178.89.65 port 55352 May 15 12:54:46.321780 sshd-session[4876]: pam_unix(sshd:session): session closed for user core May 15 12:54:46.326743 systemd-logind[1520]: Session 15 logged out. Waiting for processes to exit. May 15 12:54:46.327608 systemd[1]: sshd@17-172.232.9.197:22-139.178.89.65:55352.service: Deactivated successfully. May 15 12:54:46.330174 systemd[1]: session-15.scope: Deactivated successfully. May 15 12:54:46.332272 systemd-logind[1520]: Removed session 15. May 15 12:54:47.408007 kubelet[2701]: E0515 12:54:47.407968 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:54:47.409251 containerd[1536]: time="2025-05-15T12:54:47.409210199Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 15 12:54:48.412541 kubelet[2701]: E0515 12:54:48.412163 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:54:50.409830 kubelet[2701]: E0515 12:54:50.408891 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:54:50.410310 containerd[1536]: time="2025-05-15T12:54:50.409891011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hfrjd,Uid:be5ba310-2fcd-4763-b9cf-8dba85ce0f76,Namespace:kube-system,Attempt:0,}" May 15 12:54:50.473847 containerd[1536]: time="2025-05-15T12:54:50.473772707Z" level=error msg="Failed to destroy network for sandbox \"f4bb2dac0a69300a67296b5fffbbc6ec18446004e45f1bb6bb7e6e456cf46458\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:54:50.477618 containerd[1536]: time="2025-05-15T12:54:50.477576981Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hfrjd,Uid:be5ba310-2fcd-4763-b9cf-8dba85ce0f76,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f4bb2dac0a69300a67296b5fffbbc6ec18446004e45f1bb6bb7e6e456cf46458\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:54:50.477920 kubelet[2701]: E0515 12:54:50.477888 2701 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4bb2dac0a69300a67296b5fffbbc6ec18446004e45f1bb6bb7e6e456cf46458\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:54:50.478055 kubelet[2701]: E0515 12:54:50.478037 2701 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4bb2dac0a69300a67296b5fffbbc6ec18446004e45f1bb6bb7e6e456cf46458\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-hfrjd" May 15 12:54:50.478122 kubelet[2701]: E0515 12:54:50.478102 2701 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4bb2dac0a69300a67296b5fffbbc6ec18446004e45f1bb6bb7e6e456cf46458\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-hfrjd" May 15 12:54:50.478424 kubelet[2701]: E0515 12:54:50.478203 2701 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-hfrjd_kube-system(be5ba310-2fcd-4763-b9cf-8dba85ce0f76)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-hfrjd_kube-system(be5ba310-2fcd-4763-b9cf-8dba85ce0f76)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f4bb2dac0a69300a67296b5fffbbc6ec18446004e45f1bb6bb7e6e456cf46458\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-hfrjd" podUID="be5ba310-2fcd-4763-b9cf-8dba85ce0f76" May 15 12:54:50.479058 systemd[1]: run-netns-cni\x2dc7e293d8\x2dbd8f\x2dbb9e\x2d139b\x2d0bd45aec7c03.mount: Deactivated successfully. May 15 12:54:51.385904 systemd[1]: Started sshd@18-172.232.9.197:22-139.178.89.65:39134.service - OpenSSH per-connection server daemon (139.178.89.65:39134). May 15 12:54:51.735655 sshd[4927]: Accepted publickey for core from 139.178.89.65 port 39134 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE May 15 12:54:51.737422 sshd-session[4927]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:54:51.744774 systemd-logind[1520]: New session 16 of user core. May 15 12:54:51.749583 systemd[1]: Started session-16.scope - Session 16 of User core. May 15 12:54:52.068616 sshd[4930]: Connection closed by 139.178.89.65 port 39134 May 15 12:54:52.068904 sshd-session[4927]: pam_unix(sshd:session): session closed for user core May 15 12:54:52.075055 systemd[1]: sshd@18-172.232.9.197:22-139.178.89.65:39134.service: Deactivated successfully. 
May 15 12:54:52.078409 systemd[1]: session-16.scope: Deactivated successfully. May 15 12:54:52.080658 systemd-logind[1520]: Session 16 logged out. Waiting for processes to exit. May 15 12:54:52.082928 systemd-logind[1520]: Removed session 16. May 15 12:54:52.268119 kubelet[2701]: I0515 12:54:52.268076 2701 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 12:54:52.268119 kubelet[2701]: I0515 12:54:52.268109 2701 container_gc.go:88] "Attempting to delete unused containers" May 15 12:54:52.270892 kubelet[2701]: I0515 12:54:52.270864 2701 image_gc_manager.go:431] "Attempting to delete unused images" May 15 12:54:52.287540 kubelet[2701]: I0515 12:54:52.287509 2701 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 12:54:52.287650 kubelet[2701]: I0515 12:54:52.287609 2701 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-6cff7bc5b6-sff94","kube-system/coredns-6f6b679f8f-hfrjd","kube-system/coredns-6f6b679f8f-svgwx","calico-system/csi-node-driver-d7hrh","calico-system/calico-node-qxkgl","calico-system/calico-typha-59b79bbb46-9qqgw","kube-system/kube-controller-manager-172-232-9-197","kube-system/kube-proxy-rhz8r","kube-system/kube-apiserver-172-232-9-197","kube-system/kube-scheduler-172-232-9-197"] May 15 12:54:52.287795 kubelet[2701]: E0515 12:54:52.287657 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-6cff7bc5b6-sff94" May 15 12:54:52.287795 kubelet[2701]: E0515 12:54:52.287666 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-hfrjd" May 15 12:54:52.287795 kubelet[2701]: E0515 12:54:52.287673 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-svgwx" May 15 12:54:52.287795 kubelet[2701]: E0515 12:54:52.287679 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-d7hrh" May 15 12:54:52.287795 kubelet[2701]: E0515 12:54:52.287687 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-qxkgl" May 15 12:54:52.287795 kubelet[2701]: E0515 12:54:52.287720 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-59b79bbb46-9qqgw" May 15 12:54:52.287795 kubelet[2701]: E0515 12:54:52.287728 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-232-9-197" May 15 12:54:52.287795 kubelet[2701]: E0515 12:54:52.287735 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-rhz8r" May 15 12:54:52.287795 kubelet[2701]: E0515 12:54:52.287743 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-232-9-197" May 15 12:54:52.287795 kubelet[2701]: E0515 12:54:52.287750 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-232-9-197" May 15 12:54:52.287795 kubelet[2701]: I0515 12:54:52.287758 2701 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 12:54:52.408952 kubelet[2701]: E0515 12:54:52.408910 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:54:52.410674 containerd[1536]: time="2025-05-15T12:54:52.410318878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-svgwx,Uid:a44ca807-6ed0-447f-9c4f-1a10e61b025b,Namespace:kube-system,Attempt:0,}" May 15 12:54:52.411543 containerd[1536]: time="2025-05-15T12:54:52.411223679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cff7bc5b6-sff94,Uid:9345d3c6-0d7e-4691-8b9a-ecd0176d0441,Namespace:calico-system,Attempt:0,}" May 15 12:54:52.516873 containerd[1536]: time="2025-05-15T12:54:52.516693121Z" level=error msg="Failed to destroy network for sandbox \"28b951e051a17af359da5943671bb39832cceb5dc54a53dc13de1f09cc413c2c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:54:52.519315 systemd[1]: run-netns-cni\x2d6ce2323c\x2dbe9b\x2d4d6c\x2d9e60\x2d026246293cd0.mount: Deactivated successfully. May 15 12:54:52.520212 containerd[1536]: time="2025-05-15T12:54:52.520186386Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-svgwx,Uid:a44ca807-6ed0-447f-9c4f-1a10e61b025b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"28b951e051a17af359da5943671bb39832cceb5dc54a53dc13de1f09cc413c2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:54:52.521763 kubelet[2701]: E0515 12:54:52.521718 2701 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28b951e051a17af359da5943671bb39832cceb5dc54a53dc13de1f09cc413c2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:54:52.521921 kubelet[2701]: E0515 12:54:52.521783 2701 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28b951e051a17af359da5943671bb39832cceb5dc54a53dc13de1f09cc413c2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-svgwx" May 15 12:54:52.521921 kubelet[2701]: E0515 12:54:52.521805 2701 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28b951e051a17af359da5943671bb39832cceb5dc54a53dc13de1f09cc413c2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-svgwx" May 15 12:54:52.521921 kubelet[2701]: E0515 12:54:52.521843 2701 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-svgwx_kube-system(a44ca807-6ed0-447f-9c4f-1a10e61b025b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-svgwx_kube-system(a44ca807-6ed0-447f-9c4f-1a10e61b025b)\\\": rpc error: code = Unknown desc = failed to setup network for 
sandbox \\\"28b951e051a17af359da5943671bb39832cceb5dc54a53dc13de1f09cc413c2c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-svgwx" podUID="a44ca807-6ed0-447f-9c4f-1a10e61b025b" May 15 12:54:52.547682 containerd[1536]: time="2025-05-15T12:54:52.547625057Z" level=error msg="Failed to destroy network for sandbox \"b9e432fbf25625c44cfc8ebe0279621dba0adc14f88ba332ea1944e12d142195\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:54:52.550679 systemd[1]: run-netns-cni\x2d5c0aa17b\x2ddb2c\x2d05e8\x2dcb98\x2d06220397e501.mount: Deactivated successfully. May 15 12:54:52.551211 containerd[1536]: time="2025-05-15T12:54:52.551041051Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cff7bc5b6-sff94,Uid:9345d3c6-0d7e-4691-8b9a-ecd0176d0441,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9e432fbf25625c44cfc8ebe0279621dba0adc14f88ba332ea1944e12d142195\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:54:52.553083 kubelet[2701]: E0515 12:54:52.552717 2701 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9e432fbf25625c44cfc8ebe0279621dba0adc14f88ba332ea1944e12d142195\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:54:52.553083 kubelet[2701]: E0515 12:54:52.552778 2701 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9e432fbf25625c44cfc8ebe0279621dba0adc14f88ba332ea1944e12d142195\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6cff7bc5b6-sff94" May 15 12:54:52.553083 kubelet[2701]: E0515 12:54:52.552801 2701 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9e432fbf25625c44cfc8ebe0279621dba0adc14f88ba332ea1944e12d142195\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6cff7bc5b6-sff94" May 15 12:54:52.553083 kubelet[2701]: E0515 12:54:52.552844 2701 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6cff7bc5b6-sff94_calico-system(9345d3c6-0d7e-4691-8b9a-ecd0176d0441)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6cff7bc5b6-sff94_calico-system(9345d3c6-0d7e-4691-8b9a-ecd0176d0441)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b9e432fbf25625c44cfc8ebe0279621dba0adc14f88ba332ea1944e12d142195\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6cff7bc5b6-sff94" podUID="9345d3c6-0d7e-4691-8b9a-ecd0176d0441" May 15 12:54:53.925714 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount468753729.mount: Deactivated successfully. May 15 12:54:53.955668 containerd[1536]: time="2025-05-15T12:54:53.955105260Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:54:53.955668 containerd[1536]: time="2025-05-15T12:54:53.955645631Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" May 15 12:54:53.956148 containerd[1536]: time="2025-05-15T12:54:53.956119761Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:54:53.957526 containerd[1536]: time="2025-05-15T12:54:53.957505903Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:54:53.957979 containerd[1536]: time="2025-05-15T12:54:53.957939723Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 6.548692974s" May 15 12:54:53.958025 containerd[1536]: time="2025-05-15T12:54:53.957980603Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" May 15 12:54:53.972715 containerd[1536]: time="2025-05-15T12:54:53.972681460Z" level=info msg="CreateContainer within sandbox \"e51c2b8cf5a3e81b67fe010150dff461cfefb91677bdd6c4499f51f57d3dcee2\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 15 12:54:53.978630 containerd[1536]: time="2025-05-15T12:54:53.978597717Z" level=info msg="Container 3ba8df478ca1ec77242c961eab74e13c0924aa16d506c3941b2d18a7d8dfee92: CDI devices from CRI Config.CDIDevices: []" May 15 12:54:53.990140 containerd[1536]: time="2025-05-15T12:54:53.990108420Z" level=info msg="CreateContainer within sandbox \"e51c2b8cf5a3e81b67fe010150dff461cfefb91677bdd6c4499f51f57d3dcee2\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"3ba8df478ca1ec77242c961eab74e13c0924aa16d506c3941b2d18a7d8dfee92\"" May 15 12:54:53.990593 containerd[1536]: time="2025-05-15T12:54:53.990573140Z" level=info msg="StartContainer for \"3ba8df478ca1ec77242c961eab74e13c0924aa16d506c3941b2d18a7d8dfee92\"" May 15 12:54:53.991791 containerd[1536]: time="2025-05-15T12:54:53.991762752Z" level=info msg="connecting to shim 3ba8df478ca1ec77242c961eab74e13c0924aa16d506c3941b2d18a7d8dfee92" address="unix:///run/containerd/s/db215aaa64dd25f9aaecf2fa877acce3b58fc3006b91cf52e51663e75ec3ec90" protocol=ttrpc version=3 May 15 12:54:54.014631 systemd[1]: Started cri-containerd-3ba8df478ca1ec77242c961eab74e13c0924aa16d506c3941b2d18a7d8dfee92.scope - libcontainer container 3ba8df478ca1ec77242c961eab74e13c0924aa16d506c3941b2d18a7d8dfee92. 
May 15 12:54:54.069751 containerd[1536]: time="2025-05-15T12:54:54.069710161Z" level=info msg="StartContainer for \"3ba8df478ca1ec77242c961eab74e13c0924aa16d506c3941b2d18a7d8dfee92\" returns successfully" May 15 12:54:54.135997 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 15 12:54:54.136104 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. May 15 12:54:54.807222 kubelet[2701]: E0515 12:54:54.807103 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:54:54.836572 kubelet[2701]: I0515 12:54:54.836034 2701 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-qxkgl" podStartSLOduration=1.941217994 podStartE2EDuration="2m24.836019237s" podCreationTimestamp="2025-05-15 12:52:30 +0000 UTC" firstStartedPulling="2025-05-15 12:52:31.064090601 +0000 UTC m=+12.758000863" lastFinishedPulling="2025-05-15 12:54:53.958891844 +0000 UTC m=+155.652802106" observedRunningTime="2025-05-15 12:54:54.827040456 +0000 UTC m=+156.520950718" watchObservedRunningTime="2025-05-15 12:54:54.836019237 +0000 UTC m=+156.529929499" May 15 12:54:54.896172 containerd[1536]: time="2025-05-15T12:54:54.896125745Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3ba8df478ca1ec77242c961eab74e13c0924aa16d506c3941b2d18a7d8dfee92\" id:\"b06c29bc71e509e3639afb27b373649b11746d0d3d509956d3201aec71e49506\" pid:5084 exit_status:1 exited_at:{seconds:1747313694 nanos:895777625}" May 15 12:54:55.407978 containerd[1536]: time="2025-05-15T12:54:55.407906835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d7hrh,Uid:30c809cf-5d96-45fe-9af3-2a80162d2f28,Namespace:calico-system,Attempt:0,}" May 15 12:54:55.564881 systemd-networkd[1467]: calic52597fc9f3: Link UP May 15 12:54:55.565548 systemd-networkd[1467]: calic52597fc9f3: Gained carrier May 15 12:54:55.588665 containerd[1536]: 2025-05-15 12:54:55.438 [INFO][5123] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 15 12:54:55.588665 containerd[1536]: 2025-05-15 12:54:55.454 [INFO][5123] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--9--197-k8s-csi--node--driver--d7hrh-eth0 csi-node-driver- calico-system 30c809cf-5d96-45fe-9af3-2a80162d2f28 597 0 2025-05-15 12:52:30 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5bcd8f69 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172-232-9-197 csi-node-driver-d7hrh eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic52597fc9f3 [] []}} ContainerID="1c991a4ed6f94db5308513a763b1c7b33d11edaa8f9e7ea23105182d9ec7bf31" Namespace="calico-system" Pod="csi-node-driver-d7hrh" WorkloadEndpoint="172--232--9--197-k8s-csi--node--driver--d7hrh-" May 15 12:54:55.588665 containerd[1536]: 2025-05-15 12:54:55.454 [INFO][5123] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1c991a4ed6f94db5308513a763b1c7b33d11edaa8f9e7ea23105182d9ec7bf31" Namespace="calico-system" Pod="csi-node-driver-d7hrh" WorkloadEndpoint="172--232--9--197-k8s-csi--node--driver--d7hrh-eth0" May 15 12:54:55.588665 containerd[1536]: 2025-05-15 12:54:55.504 
[INFO][5141] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1c991a4ed6f94db5308513a763b1c7b33d11edaa8f9e7ea23105182d9ec7bf31" HandleID="k8s-pod-network.1c991a4ed6f94db5308513a763b1c7b33d11edaa8f9e7ea23105182d9ec7bf31" Workload="172--232--9--197-k8s-csi--node--driver--d7hrh-eth0" May 15 12:54:55.588884 containerd[1536]: 2025-05-15 12:54:55.518 [INFO][5141] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1c991a4ed6f94db5308513a763b1c7b33d11edaa8f9e7ea23105182d9ec7bf31" HandleID="k8s-pod-network.1c991a4ed6f94db5308513a763b1c7b33d11edaa8f9e7ea23105182d9ec7bf31" Workload="172--232--9--197-k8s-csi--node--driver--d7hrh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003194d0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-232-9-197", "pod":"csi-node-driver-d7hrh", "timestamp":"2025-05-15 12:54:55.504574325 +0000 UTC"}, Hostname:"172-232-9-197", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 12:54:55.588884 containerd[1536]: 2025-05-15 12:54:55.518 [INFO][5141] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 12:54:55.588884 containerd[1536]: 2025-05-15 12:54:55.518 [INFO][5141] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 12:54:55.588884 containerd[1536]: 2025-05-15 12:54:55.518 [INFO][5141] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-9-197' May 15 12:54:55.588884 containerd[1536]: 2025-05-15 12:54:55.520 [INFO][5141] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1c991a4ed6f94db5308513a763b1c7b33d11edaa8f9e7ea23105182d9ec7bf31" host="172-232-9-197" May 15 12:54:55.588884 containerd[1536]: 2025-05-15 12:54:55.525 [INFO][5141] ipam/ipam.go 372: Looking up existing affinities for host host="172-232-9-197" May 15 12:54:55.588884 containerd[1536]: 2025-05-15 12:54:55.531 [INFO][5141] ipam/ipam.go 489: Trying affinity for 192.168.85.192/26 host="172-232-9-197" May 15 12:54:55.588884 containerd[1536]: 2025-05-15 12:54:55.533 [INFO][5141] ipam/ipam.go 155: Attempting to load block cidr=192.168.85.192/26 host="172-232-9-197" May 15 12:54:55.588884 containerd[1536]: 2025-05-15 12:54:55.538 [INFO][5141] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.85.192/26 host="172-232-9-197" May 15 12:54:55.588884 containerd[1536]: 2025-05-15 12:54:55.538 [INFO][5141] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.85.192/26 handle="k8s-pod-network.1c991a4ed6f94db5308513a763b1c7b33d11edaa8f9e7ea23105182d9ec7bf31" host="172-232-9-197" May 15 12:54:55.589099 containerd[1536]: 2025-05-15 12:54:55.539 [INFO][5141] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1c991a4ed6f94db5308513a763b1c7b33d11edaa8f9e7ea23105182d9ec7bf31 May 15 12:54:55.589099 containerd[1536]: 2025-05-15 12:54:55.543 [INFO][5141] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.85.192/26 handle="k8s-pod-network.1c991a4ed6f94db5308513a763b1c7b33d11edaa8f9e7ea23105182d9ec7bf31" host="172-232-9-197" May 15 12:54:55.589099 containerd[1536]: 2025-05-15 12:54:55.548 [INFO][5141] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.85.193/26] block=192.168.85.192/26 handle="k8s-pod-network.1c991a4ed6f94db5308513a763b1c7b33d11edaa8f9e7ea23105182d9ec7bf31" host="172-232-9-197" May 15 12:54:55.589099 containerd[1536]: 2025-05-15 
12:54:55.548 [INFO][5141] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.85.193/26] handle="k8s-pod-network.1c991a4ed6f94db5308513a763b1c7b33d11edaa8f9e7ea23105182d9ec7bf31" host="172-232-9-197" May 15 12:54:55.589099 containerd[1536]: 2025-05-15 12:54:55.548 [INFO][5141] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 12:54:55.589099 containerd[1536]: 2025-05-15 12:54:55.549 [INFO][5141] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.85.193/26] IPv6=[] ContainerID="1c991a4ed6f94db5308513a763b1c7b33d11edaa8f9e7ea23105182d9ec7bf31" HandleID="k8s-pod-network.1c991a4ed6f94db5308513a763b1c7b33d11edaa8f9e7ea23105182d9ec7bf31" Workload="172--232--9--197-k8s-csi--node--driver--d7hrh-eth0" May 15 12:54:55.589212 containerd[1536]: 2025-05-15 12:54:55.553 [INFO][5123] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1c991a4ed6f94db5308513a763b1c7b33d11edaa8f9e7ea23105182d9ec7bf31" Namespace="calico-system" Pod="csi-node-driver-d7hrh" WorkloadEndpoint="172--232--9--197-k8s-csi--node--driver--d7hrh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--9--197-k8s-csi--node--driver--d7hrh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"30c809cf-5d96-45fe-9af3-2a80162d2f28", ResourceVersion:"597", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 12, 52, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-9-197", ContainerID:"", Pod:"csi-node-driver-d7hrh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.85.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic52597fc9f3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 12:54:55.589212 containerd[1536]: 2025-05-15 12:54:55.553 [INFO][5123] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.85.193/32] ContainerID="1c991a4ed6f94db5308513a763b1c7b33d11edaa8f9e7ea23105182d9ec7bf31" Namespace="calico-system" Pod="csi-node-driver-d7hrh" WorkloadEndpoint="172--232--9--197-k8s-csi--node--driver--d7hrh-eth0" May 15 12:54:55.589279 containerd[1536]: 2025-05-15 12:54:55.553 [INFO][5123] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic52597fc9f3 ContainerID="1c991a4ed6f94db5308513a763b1c7b33d11edaa8f9e7ea23105182d9ec7bf31" Namespace="calico-system" Pod="csi-node-driver-d7hrh" WorkloadEndpoint="172--232--9--197-k8s-csi--node--driver--d7hrh-eth0" May 15 12:54:55.589279 containerd[1536]: 2025-05-15 12:54:55.566 [INFO][5123] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1c991a4ed6f94db5308513a763b1c7b33d11edaa8f9e7ea23105182d9ec7bf31" Namespace="calico-system" Pod="csi-node-driver-d7hrh" 
WorkloadEndpoint="172--232--9--197-k8s-csi--node--driver--d7hrh-eth0" May 15 12:54:55.589322 containerd[1536]: 2025-05-15 12:54:55.566 [INFO][5123] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1c991a4ed6f94db5308513a763b1c7b33d11edaa8f9e7ea23105182d9ec7bf31" Namespace="calico-system" Pod="csi-node-driver-d7hrh" WorkloadEndpoint="172--232--9--197-k8s-csi--node--driver--d7hrh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--9--197-k8s-csi--node--driver--d7hrh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"30c809cf-5d96-45fe-9af3-2a80162d2f28", ResourceVersion:"597", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 12, 52, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-9-197", ContainerID:"1c991a4ed6f94db5308513a763b1c7b33d11edaa8f9e7ea23105182d9ec7bf31", Pod:"csi-node-driver-d7hrh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.85.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic52597fc9f3", MAC:"fe:e3:6b:4f:bd:e2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 12:54:55.589365 containerd[1536]: 2025-05-15 12:54:55.582 [INFO][5123] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1c991a4ed6f94db5308513a763b1c7b33d11edaa8f9e7ea23105182d9ec7bf31" Namespace="calico-system" Pod="csi-node-driver-d7hrh" WorkloadEndpoint="172--232--9--197-k8s-csi--node--driver--d7hrh-eth0" May 15 12:54:55.724276 containerd[1536]: time="2025-05-15T12:54:55.724126644Z" level=info msg="connecting to shim 1c991a4ed6f94db5308513a763b1c7b33d11edaa8f9e7ea23105182d9ec7bf31" address="unix:///run/containerd/s/079e39ee3e77303eba473096e5fa577032b5abe9c86a1ac9bb4f573eed31cd98" namespace=k8s.io protocol=ttrpc version=3 May 15 12:54:55.836213 kubelet[2701]: E0515 12:54:55.836136 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:54:55.930618 systemd[1]: Started cri-containerd-1c991a4ed6f94db5308513a763b1c7b33d11edaa8f9e7ea23105182d9ec7bf31.scope - libcontainer container 1c991a4ed6f94db5308513a763b1c7b33d11edaa8f9e7ea23105182d9ec7bf31. 
May 15 12:54:56.221199 containerd[1536]: time="2025-05-15T12:54:56.221140768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d7hrh,Uid:30c809cf-5d96-45fe-9af3-2a80162d2f28,Namespace:calico-system,Attempt:0,} returns sandbox id \"1c991a4ed6f94db5308513a763b1c7b33d11edaa8f9e7ea23105182d9ec7bf31\"" May 15 12:54:56.224584 containerd[1536]: time="2025-05-15T12:54:56.224560802Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 15 12:54:56.338945 containerd[1536]: time="2025-05-15T12:54:56.338908350Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3ba8df478ca1ec77242c961eab74e13c0924aa16d506c3941b2d18a7d8dfee92\" id:\"0f63aef6a43f170ecb09113566a02e1a56737291b18479f78f1e2c1a448041a3\" pid:5236 exit_status:1 exited_at:{seconds:1747313696 nanos:338536789}" May 15 12:54:56.539349 containerd[1536]: time="2025-05-15T12:54:56.539228576Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3ba8df478ca1ec77242c961eab74e13c0924aa16d506c3941b2d18a7d8dfee92\" id:\"e3b91fa810a1eaefdb19e0bcceade0882a388b706b9b00ef7ad576af8608e573\" pid:5300 exit_status:1 exited_at:{seconds:1747313696 nanos:538953476}" May 15 12:54:56.826067 systemd-networkd[1467]: vxlan.calico: Link UP May 15 12:54:56.826078 systemd-networkd[1467]: vxlan.calico: Gained carrier May 15 12:54:57.126504 systemd[1]: Started sshd@19-172.232.9.197:22-139.178.89.65:55494.service - OpenSSH per-connection server daemon (139.178.89.65:55494). May 15 12:54:57.245659 systemd-networkd[1467]: calic52597fc9f3: Gained IPv6LL May 15 12:54:57.454773 sshd[5406]: Accepted publickey for core from 139.178.89.65 port 55494 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE May 15 12:54:57.456138 sshd-session[5406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:54:57.461855 systemd-logind[1520]: New session 17 of user core. May 15 12:54:57.465651 systemd[1]: Started session-17.scope - Session 17 of User core. May 15 12:54:57.753156 sshd[5409]: Connection closed by 139.178.89.65 port 55494 May 15 12:54:57.754032 sshd-session[5406]: pam_unix(sshd:session): session closed for user core May 15 12:54:57.760383 systemd[1]: sshd@19-172.232.9.197:22-139.178.89.65:55494.service: Deactivated successfully. May 15 12:54:57.763863 systemd[1]: session-17.scope: Deactivated successfully. May 15 12:54:57.765765 systemd-logind[1520]: Session 17 logged out. Waiting for processes to exit. May 15 12:54:57.767402 systemd-logind[1520]: Removed session 17. 
May 15 12:54:58.653843 systemd-networkd[1467]: vxlan.calico: Gained IPv6LL May 15 12:55:01.904875 containerd[1536]: time="2025-05-15T12:55:01.904825481Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:55:01.905691 containerd[1536]: time="2025-05-15T12:55:01.905346821Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898" May 15 12:55:01.906868 containerd[1536]: time="2025-05-15T12:55:01.906237022Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:55:01.907959 containerd[1536]: time="2025-05-15T12:55:01.907938944Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:55:01.908505 containerd[1536]: time="2025-05-15T12:55:01.908483534Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 5.683895672s" May 15 12:55:01.908565 containerd[1536]: time="2025-05-15T12:55:01.908508895Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" May 15 12:55:01.910779 containerd[1536]: time="2025-05-15T12:55:01.910756057Z" level=info msg="CreateContainer within sandbox \"1c991a4ed6f94db5308513a763b1c7b33d11edaa8f9e7ea23105182d9ec7bf31\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 15 12:55:01.916479 containerd[1536]: time="2025-05-15T12:55:01.915914232Z" level=info msg="Container b2e83cb0b1c73f27e9671b8625b9cb073ed5e13ca96db4841e309595a8c02292: CDI devices from CRI Config.CDIDevices: []" May 15 12:55:01.927700 containerd[1536]: time="2025-05-15T12:55:01.927659425Z" level=info msg="CreateContainer within sandbox \"1c991a4ed6f94db5308513a763b1c7b33d11edaa8f9e7ea23105182d9ec7bf31\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"b2e83cb0b1c73f27e9671b8625b9cb073ed5e13ca96db4841e309595a8c02292\"" May 15 12:55:01.929436 containerd[1536]: time="2025-05-15T12:55:01.928553707Z" level=info msg="StartContainer for \"b2e83cb0b1c73f27e9671b8625b9cb073ed5e13ca96db4841e309595a8c02292\"" May 15 12:55:01.930180 containerd[1536]: time="2025-05-15T12:55:01.930151038Z" level=info msg="connecting to shim b2e83cb0b1c73f27e9671b8625b9cb073ed5e13ca96db4841e309595a8c02292" address="unix:///run/containerd/s/079e39ee3e77303eba473096e5fa577032b5abe9c86a1ac9bb4f573eed31cd98" protocol=ttrpc version=3 May 15 12:55:01.968633 systemd[1]: Started cri-containerd-b2e83cb0b1c73f27e9671b8625b9cb073ed5e13ca96db4841e309595a8c02292.scope - libcontainer container b2e83cb0b1c73f27e9671b8625b9cb073ed5e13ca96db4841e309595a8c02292. 
May 15 12:55:02.068420 containerd[1536]: time="2025-05-15T12:55:02.068376088Z" level=info msg="StartContainer for \"b2e83cb0b1c73f27e9671b8625b9cb073ed5e13ca96db4841e309595a8c02292\" returns successfully" May 15 12:55:02.070657 containerd[1536]: time="2025-05-15T12:55:02.070108310Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 15 12:55:02.306436 kubelet[2701]: I0515 12:55:02.305601 2701 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 12:55:02.306436 kubelet[2701]: I0515 12:55:02.305652 2701 container_gc.go:88] "Attempting to delete unused containers" May 15 12:55:02.308009 kubelet[2701]: I0515 12:55:02.307925 2701 image_gc_manager.go:431] "Attempting to delete unused images" May 15 12:55:02.310513 kubelet[2701]: I0515 12:55:02.310486 2701 image_gc_manager.go:487] "Removing image to free bytes" imageID="sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578" size=21998657 runtimeHandler="" May 15 12:55:02.310781 containerd[1536]: time="2025-05-15T12:55:02.310750131Z" level=info msg="RemoveImage \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\"" May 15 12:55:02.312054 containerd[1536]: time="2025-05-15T12:55:02.311982163Z" level=info msg="ImageDelete event name:\"quay.io/tigera/operator:v1.36.7\"" May 15 12:55:02.312631 containerd[1536]: time="2025-05-15T12:55:02.312587923Z" level=info msg="ImageDelete event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\"" May 15 12:55:02.312980 containerd[1536]: time="2025-05-15T12:55:02.312958615Z" level=info msg="RemoveImage \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" returns successfully" May 15 12:55:02.313045 containerd[1536]: time="2025-05-15T12:55:02.313022855Z" level=info msg="ImageDelete event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\"" May 15 12:55:02.322186 kubelet[2701]: I0515 12:55:02.322169 2701 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 12:55:02.322312 kubelet[2701]: I0515 12:55:02.322293 2701 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-6f6b679f8f-svgwx","calico-system/calico-kube-controllers-6cff7bc5b6-sff94","kube-system/coredns-6f6b679f8f-hfrjd","calico-system/calico-typha-59b79bbb46-9qqgw","calico-system/calico-node-qxkgl","kube-system/kube-controller-manager-172-232-9-197","kube-system/kube-proxy-rhz8r","kube-system/kube-apiserver-172-232-9-197","kube-system/kube-scheduler-172-232-9-197","calico-system/csi-node-driver-d7hrh"] May 15 12:55:02.322439 kubelet[2701]: E0515 12:55:02.322341 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-svgwx" May 15 12:55:02.322439 kubelet[2701]: E0515 12:55:02.322354 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-6cff7bc5b6-sff94" May 15 12:55:02.322439 kubelet[2701]: E0515 12:55:02.322365 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-hfrjd" May 15 12:55:02.322439 kubelet[2701]: E0515 12:55:02.322376 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-59b79bbb46-9qqgw" May 15 12:55:02.322439 kubelet[2701]: E0515 12:55:02.322388 2701 eviction_manager.go:598] 
"Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-qxkgl" May 15 12:55:02.322439 kubelet[2701]: E0515 12:55:02.322430 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-232-9-197" May 15 12:55:02.322439 kubelet[2701]: E0515 12:55:02.322442 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-rhz8r" May 15 12:55:02.322732 kubelet[2701]: E0515 12:55:02.322451 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-232-9-197" May 15 12:55:02.322732 kubelet[2701]: E0515 12:55:02.322482 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-232-9-197" May 15 12:55:02.322732 kubelet[2701]: E0515 12:55:02.322511 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-d7hrh" May 15 12:55:02.322732 kubelet[2701]: I0515 12:55:02.322528 2701 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 12:55:02.407865 kubelet[2701]: E0515 12:55:02.407536 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:55:02.815321 systemd[1]: Started sshd@20-172.232.9.197:22-139.178.89.65:55502.service - OpenSSH per-connection server daemon (139.178.89.65:55502). May 15 12:55:03.159340 sshd[5469]: Accepted publickey for core from 139.178.89.65 port 55502 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE May 15 12:55:03.160908 sshd-session[5469]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:55:03.166712 systemd-logind[1520]: New session 18 of user core. May 15 12:55:03.176773 systemd[1]: Started session-18.scope - Session 18 of User core. May 15 12:55:03.407160 kubelet[2701]: E0515 12:55:03.407117 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:55:03.409674 kubelet[2701]: E0515 12:55:03.408363 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:55:03.409674 kubelet[2701]: E0515 12:55:03.408596 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:55:03.409755 containerd[1536]: time="2025-05-15T12:55:03.409380823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-svgwx,Uid:a44ca807-6ed0-447f-9c4f-1a10e61b025b,Namespace:kube-system,Attempt:0,}" May 15 12:55:03.410339 containerd[1536]: time="2025-05-15T12:55:03.410309154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hfrjd,Uid:be5ba310-2fcd-4763-b9cf-8dba85ce0f76,Namespace:kube-system,Attempt:0,}" May 15 12:55:03.513479 sshd[5471]: Connection closed by 139.178.89.65 port 55502 May 15 12:55:03.515886 sshd-session[5469]: pam_unix(sshd:session): session closed for user core May 15 12:55:03.522589 systemd[1]: sshd@20-172.232.9.197:22-139.178.89.65:55502.service: Deactivated successfully. 
May 15 12:55:03.524416 systemd[1]: session-18.scope: Deactivated successfully. May 15 12:55:03.526289 systemd-logind[1520]: Session 18 logged out. Waiting for processes to exit. May 15 12:55:03.529094 systemd-logind[1520]: Removed session 18. May 15 12:55:03.588779 systemd-networkd[1467]: cali27e4c9d5e04: Link UP May 15 12:55:03.589836 systemd-networkd[1467]: cali27e4c9d5e04: Gained carrier May 15 12:55:03.605051 containerd[1536]: 2025-05-15 12:55:03.490 [INFO][5479] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--9--197-k8s-coredns--6f6b679f8f--svgwx-eth0 coredns-6f6b679f8f- kube-system a44ca807-6ed0-447f-9c4f-1a10e61b025b 717 0 2025-05-15 12:52:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-232-9-197 coredns-6f6b679f8f-svgwx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali27e4c9d5e04 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="74e0a7bc3e706533b93b1e586ff89534454a6e759a7cf9a547f52c3d11776bab" Namespace="kube-system" Pod="coredns-6f6b679f8f-svgwx" WorkloadEndpoint="172--232--9--197-k8s-coredns--6f6b679f8f--svgwx-" May 15 12:55:03.605051 containerd[1536]: 2025-05-15 12:55:03.491 [INFO][5479] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="74e0a7bc3e706533b93b1e586ff89534454a6e759a7cf9a547f52c3d11776bab" Namespace="kube-system" Pod="coredns-6f6b679f8f-svgwx" WorkloadEndpoint="172--232--9--197-k8s-coredns--6f6b679f8f--svgwx-eth0" May 15 12:55:03.605051 containerd[1536]: 2025-05-15 12:55:03.535 [INFO][5505] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="74e0a7bc3e706533b93b1e586ff89534454a6e759a7cf9a547f52c3d11776bab" HandleID="k8s-pod-network.74e0a7bc3e706533b93b1e586ff89534454a6e759a7cf9a547f52c3d11776bab" Workload="172--232--9--197-k8s-coredns--6f6b679f8f--svgwx-eth0" May 15 12:55:03.605199 containerd[1536]: 2025-05-15 12:55:03.550 [INFO][5505] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="74e0a7bc3e706533b93b1e586ff89534454a6e759a7cf9a547f52c3d11776bab" HandleID="k8s-pod-network.74e0a7bc3e706533b93b1e586ff89534454a6e759a7cf9a547f52c3d11776bab" Workload="172--232--9--197-k8s-coredns--6f6b679f8f--svgwx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031ba90), Attrs:map[string]string{"namespace":"kube-system", "node":"172-232-9-197", "pod":"coredns-6f6b679f8f-svgwx", "timestamp":"2025-05-15 12:55:03.53579668 +0000 UTC"}, Hostname:"172-232-9-197", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 12:55:03.605199 containerd[1536]: 2025-05-15 12:55:03.550 [INFO][5505] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 12:55:03.605199 containerd[1536]: 2025-05-15 12:55:03.550 [INFO][5505] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 15 12:55:03.605199 containerd[1536]: 2025-05-15 12:55:03.550 [INFO][5505] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-9-197' May 15 12:55:03.605199 containerd[1536]: 2025-05-15 12:55:03.554 [INFO][5505] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.74e0a7bc3e706533b93b1e586ff89534454a6e759a7cf9a547f52c3d11776bab" host="172-232-9-197" May 15 12:55:03.605199 containerd[1536]: 2025-05-15 12:55:03.560 [INFO][5505] ipam/ipam.go 372: Looking up existing affinities for host host="172-232-9-197" May 15 12:55:03.605199 containerd[1536]: 2025-05-15 12:55:03.565 [INFO][5505] ipam/ipam.go 489: Trying affinity for 192.168.85.192/26 host="172-232-9-197" May 15 12:55:03.605199 containerd[1536]: 2025-05-15 12:55:03.567 [INFO][5505] ipam/ipam.go 155: Attempting to load block cidr=192.168.85.192/26 host="172-232-9-197" May 15 12:55:03.605199 containerd[1536]: 2025-05-15 12:55:03.569 [INFO][5505] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.85.192/26 host="172-232-9-197" May 15 12:55:03.605199 containerd[1536]: 2025-05-15 12:55:03.569 [INFO][5505] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.85.192/26 handle="k8s-pod-network.74e0a7bc3e706533b93b1e586ff89534454a6e759a7cf9a547f52c3d11776bab" host="172-232-9-197" May 15 12:55:03.608269 containerd[1536]: 2025-05-15 12:55:03.571 [INFO][5505] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.74e0a7bc3e706533b93b1e586ff89534454a6e759a7cf9a547f52c3d11776bab May 15 12:55:03.608269 containerd[1536]: 2025-05-15 12:55:03.575 [INFO][5505] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.85.192/26 handle="k8s-pod-network.74e0a7bc3e706533b93b1e586ff89534454a6e759a7cf9a547f52c3d11776bab" host="172-232-9-197" May 15 12:55:03.608269 containerd[1536]: 2025-05-15 12:55:03.582 [INFO][5505] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.85.194/26] block=192.168.85.192/26 handle="k8s-pod-network.74e0a7bc3e706533b93b1e586ff89534454a6e759a7cf9a547f52c3d11776bab" host="172-232-9-197" May 15 12:55:03.608269 containerd[1536]: 2025-05-15 12:55:03.582 [INFO][5505] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.85.194/26] handle="k8s-pod-network.74e0a7bc3e706533b93b1e586ff89534454a6e759a7cf9a547f52c3d11776bab" host="172-232-9-197" May 15 12:55:03.608269 containerd[1536]: 2025-05-15 12:55:03.582 [INFO][5505] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 15 12:55:03.608269 containerd[1536]: 2025-05-15 12:55:03.583 [INFO][5505] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.85.194/26] IPv6=[] ContainerID="74e0a7bc3e706533b93b1e586ff89534454a6e759a7cf9a547f52c3d11776bab" HandleID="k8s-pod-network.74e0a7bc3e706533b93b1e586ff89534454a6e759a7cf9a547f52c3d11776bab" Workload="172--232--9--197-k8s-coredns--6f6b679f8f--svgwx-eth0" May 15 12:55:03.608384 containerd[1536]: 2025-05-15 12:55:03.585 [INFO][5479] cni-plugin/k8s.go 386: Populated endpoint ContainerID="74e0a7bc3e706533b93b1e586ff89534454a6e759a7cf9a547f52c3d11776bab" Namespace="kube-system" Pod="coredns-6f6b679f8f-svgwx" WorkloadEndpoint="172--232--9--197-k8s-coredns--6f6b679f8f--svgwx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--9--197-k8s-coredns--6f6b679f8f--svgwx-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"a44ca807-6ed0-447f-9c4f-1a10e61b025b", ResourceVersion:"717", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 12, 52, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-9-197", ContainerID:"", Pod:"coredns-6f6b679f8f-svgwx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.85.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali27e4c9d5e04", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 12:55:03.610796 containerd[1536]: 2025-05-15 12:55:03.585 [INFO][5479] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.85.194/32] ContainerID="74e0a7bc3e706533b93b1e586ff89534454a6e759a7cf9a547f52c3d11776bab" Namespace="kube-system" Pod="coredns-6f6b679f8f-svgwx" WorkloadEndpoint="172--232--9--197-k8s-coredns--6f6b679f8f--svgwx-eth0" May 15 12:55:03.610796 containerd[1536]: 2025-05-15 12:55:03.585 [INFO][5479] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali27e4c9d5e04 ContainerID="74e0a7bc3e706533b93b1e586ff89534454a6e759a7cf9a547f52c3d11776bab" Namespace="kube-system" Pod="coredns-6f6b679f8f-svgwx" WorkloadEndpoint="172--232--9--197-k8s-coredns--6f6b679f8f--svgwx-eth0" May 15 12:55:03.610796 containerd[1536]: 2025-05-15 12:55:03.588 [INFO][5479] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="74e0a7bc3e706533b93b1e586ff89534454a6e759a7cf9a547f52c3d11776bab" Namespace="kube-system" Pod="coredns-6f6b679f8f-svgwx" WorkloadEndpoint="172--232--9--197-k8s-coredns--6f6b679f8f--svgwx-eth0" May 15 12:55:03.610893 
containerd[1536]: 2025-05-15 12:55:03.588 [INFO][5479] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="74e0a7bc3e706533b93b1e586ff89534454a6e759a7cf9a547f52c3d11776bab" Namespace="kube-system" Pod="coredns-6f6b679f8f-svgwx" WorkloadEndpoint="172--232--9--197-k8s-coredns--6f6b679f8f--svgwx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--9--197-k8s-coredns--6f6b679f8f--svgwx-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"a44ca807-6ed0-447f-9c4f-1a10e61b025b", ResourceVersion:"717", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 12, 52, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-9-197", ContainerID:"74e0a7bc3e706533b93b1e586ff89534454a6e759a7cf9a547f52c3d11776bab", Pod:"coredns-6f6b679f8f-svgwx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.85.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali27e4c9d5e04", MAC:"aa:b5:05:2a:4a:2c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 12:55:03.610893 containerd[1536]: 2025-05-15 12:55:03.598 [INFO][5479] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="74e0a7bc3e706533b93b1e586ff89534454a6e759a7cf9a547f52c3d11776bab" Namespace="kube-system" Pod="coredns-6f6b679f8f-svgwx" WorkloadEndpoint="172--232--9--197-k8s-coredns--6f6b679f8f--svgwx-eth0" May 15 12:55:03.640559 containerd[1536]: time="2025-05-15T12:55:03.640479073Z" level=info msg="connecting to shim 74e0a7bc3e706533b93b1e586ff89534454a6e759a7cf9a547f52c3d11776bab" address="unix:///run/containerd/s/6f38fe7fba66a611b307a5c0d7ea7f1a93f5f62e8b4658e0f4b60c7cf924765d" namespace=k8s.io protocol=ttrpc version=3 May 15 12:55:03.665589 systemd[1]: Started cri-containerd-74e0a7bc3e706533b93b1e586ff89534454a6e759a7cf9a547f52c3d11776bab.scope - libcontainer container 74e0a7bc3e706533b93b1e586ff89534454a6e759a7cf9a547f52c3d11776bab. 
May 15 12:55:03.698282 systemd-networkd[1467]: calif3d7613b42a: Link UP May 15 12:55:03.699299 systemd-networkd[1467]: calif3d7613b42a: Gained carrier May 15 12:55:03.723474 containerd[1536]: 2025-05-15 12:55:03.495 [INFO][5480] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--9--197-k8s-coredns--6f6b679f8f--hfrjd-eth0 coredns-6f6b679f8f- kube-system be5ba310-2fcd-4763-b9cf-8dba85ce0f76 714 0 2025-05-15 12:52:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-232-9-197 coredns-6f6b679f8f-hfrjd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif3d7613b42a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="8ce17fe5317bf4f0b4a415e956461662df3ae8c719c91b2cbb1d040743e7ff12" Namespace="kube-system" Pod="coredns-6f6b679f8f-hfrjd" WorkloadEndpoint="172--232--9--197-k8s-coredns--6f6b679f8f--hfrjd-" May 15 12:55:03.723474 containerd[1536]: 2025-05-15 12:55:03.496 [INFO][5480] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8ce17fe5317bf4f0b4a415e956461662df3ae8c719c91b2cbb1d040743e7ff12" Namespace="kube-system" Pod="coredns-6f6b679f8f-hfrjd" WorkloadEndpoint="172--232--9--197-k8s-coredns--6f6b679f8f--hfrjd-eth0" May 15 12:55:03.723474 containerd[1536]: 2025-05-15 12:55:03.549 [INFO][5510] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8ce17fe5317bf4f0b4a415e956461662df3ae8c719c91b2cbb1d040743e7ff12" HandleID="k8s-pod-network.8ce17fe5317bf4f0b4a415e956461662df3ae8c719c91b2cbb1d040743e7ff12" Workload="172--232--9--197-k8s-coredns--6f6b679f8f--hfrjd-eth0" May 15 12:55:03.723474 containerd[1536]: 2025-05-15 12:55:03.560 [INFO][5510] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8ce17fe5317bf4f0b4a415e956461662df3ae8c719c91b2cbb1d040743e7ff12" HandleID="k8s-pod-network.8ce17fe5317bf4f0b4a415e956461662df3ae8c719c91b2cbb1d040743e7ff12" Workload="172--232--9--197-k8s-coredns--6f6b679f8f--hfrjd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001030e0), Attrs:map[string]string{"namespace":"kube-system", "node":"172-232-9-197", "pod":"coredns-6f6b679f8f-hfrjd", "timestamp":"2025-05-15 12:55:03.549709335 +0000 UTC"}, Hostname:"172-232-9-197", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 12:55:03.723474 containerd[1536]: 2025-05-15 12:55:03.560 [INFO][5510] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 12:55:03.723474 containerd[1536]: 2025-05-15 12:55:03.582 [INFO][5510] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 15 12:55:03.723474 containerd[1536]: 2025-05-15 12:55:03.583 [INFO][5510] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-9-197' May 15 12:55:03.723474 containerd[1536]: 2025-05-15 12:55:03.654 [INFO][5510] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8ce17fe5317bf4f0b4a415e956461662df3ae8c719c91b2cbb1d040743e7ff12" host="172-232-9-197" May 15 12:55:03.723474 containerd[1536]: 2025-05-15 12:55:03.663 [INFO][5510] ipam/ipam.go 372: Looking up existing affinities for host host="172-232-9-197" May 15 12:55:03.723474 containerd[1536]: 2025-05-15 12:55:03.671 [INFO][5510] ipam/ipam.go 489: Trying affinity for 192.168.85.192/26 host="172-232-9-197" May 15 12:55:03.723474 containerd[1536]: 2025-05-15 12:55:03.673 [INFO][5510] ipam/ipam.go 155: Attempting to load block cidr=192.168.85.192/26 host="172-232-9-197" May 15 12:55:03.723474 containerd[1536]: 2025-05-15 12:55:03.676 [INFO][5510] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.85.192/26 host="172-232-9-197" May 15 12:55:03.723474 containerd[1536]: 2025-05-15 12:55:03.676 [INFO][5510] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.85.192/26 handle="k8s-pod-network.8ce17fe5317bf4f0b4a415e956461662df3ae8c719c91b2cbb1d040743e7ff12" host="172-232-9-197" May 15 12:55:03.723474 containerd[1536]: 2025-05-15 12:55:03.677 [INFO][5510] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8ce17fe5317bf4f0b4a415e956461662df3ae8c719c91b2cbb1d040743e7ff12 May 15 12:55:03.723474 containerd[1536]: 2025-05-15 12:55:03.682 [INFO][5510] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.85.192/26 handle="k8s-pod-network.8ce17fe5317bf4f0b4a415e956461662df3ae8c719c91b2cbb1d040743e7ff12" host="172-232-9-197" May 15 12:55:03.723474 containerd[1536]: 2025-05-15 12:55:03.690 [INFO][5510] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.85.195/26] block=192.168.85.192/26 handle="k8s-pod-network.8ce17fe5317bf4f0b4a415e956461662df3ae8c719c91b2cbb1d040743e7ff12" host="172-232-9-197" May 15 12:55:03.723474 containerd[1536]: 2025-05-15 12:55:03.690 [INFO][5510] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.85.195/26] handle="k8s-pod-network.8ce17fe5317bf4f0b4a415e956461662df3ae8c719c91b2cbb1d040743e7ff12" host="172-232-9-197" May 15 12:55:03.723474 containerd[1536]: 2025-05-15 12:55:03.690 [INFO][5510] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 15 12:55:03.723474 containerd[1536]: 2025-05-15 12:55:03.690 [INFO][5510] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.85.195/26] IPv6=[] ContainerID="8ce17fe5317bf4f0b4a415e956461662df3ae8c719c91b2cbb1d040743e7ff12" HandleID="k8s-pod-network.8ce17fe5317bf4f0b4a415e956461662df3ae8c719c91b2cbb1d040743e7ff12" Workload="172--232--9--197-k8s-coredns--6f6b679f8f--hfrjd-eth0" May 15 12:55:03.724186 containerd[1536]: 2025-05-15 12:55:03.694 [INFO][5480] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8ce17fe5317bf4f0b4a415e956461662df3ae8c719c91b2cbb1d040743e7ff12" Namespace="kube-system" Pod="coredns-6f6b679f8f-hfrjd" WorkloadEndpoint="172--232--9--197-k8s-coredns--6f6b679f8f--hfrjd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--9--197-k8s-coredns--6f6b679f8f--hfrjd-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"be5ba310-2fcd-4763-b9cf-8dba85ce0f76", ResourceVersion:"714", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 12, 52, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-9-197", ContainerID:"", Pod:"coredns-6f6b679f8f-hfrjd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.85.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif3d7613b42a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 12:55:03.724186 containerd[1536]: 2025-05-15 12:55:03.694 [INFO][5480] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.85.195/32] ContainerID="8ce17fe5317bf4f0b4a415e956461662df3ae8c719c91b2cbb1d040743e7ff12" Namespace="kube-system" Pod="coredns-6f6b679f8f-hfrjd" WorkloadEndpoint="172--232--9--197-k8s-coredns--6f6b679f8f--hfrjd-eth0" May 15 12:55:03.724186 containerd[1536]: 2025-05-15 12:55:03.694 [INFO][5480] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif3d7613b42a ContainerID="8ce17fe5317bf4f0b4a415e956461662df3ae8c719c91b2cbb1d040743e7ff12" Namespace="kube-system" Pod="coredns-6f6b679f8f-hfrjd" WorkloadEndpoint="172--232--9--197-k8s-coredns--6f6b679f8f--hfrjd-eth0" May 15 12:55:03.724186 containerd[1536]: 2025-05-15 12:55:03.698 [INFO][5480] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8ce17fe5317bf4f0b4a415e956461662df3ae8c719c91b2cbb1d040743e7ff12" Namespace="kube-system" Pod="coredns-6f6b679f8f-hfrjd" WorkloadEndpoint="172--232--9--197-k8s-coredns--6f6b679f8f--hfrjd-eth0" May 15 12:55:03.724186 
containerd[1536]: 2025-05-15 12:55:03.699 [INFO][5480] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8ce17fe5317bf4f0b4a415e956461662df3ae8c719c91b2cbb1d040743e7ff12" Namespace="kube-system" Pod="coredns-6f6b679f8f-hfrjd" WorkloadEndpoint="172--232--9--197-k8s-coredns--6f6b679f8f--hfrjd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--9--197-k8s-coredns--6f6b679f8f--hfrjd-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"be5ba310-2fcd-4763-b9cf-8dba85ce0f76", ResourceVersion:"714", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 12, 52, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-9-197", ContainerID:"8ce17fe5317bf4f0b4a415e956461662df3ae8c719c91b2cbb1d040743e7ff12", Pod:"coredns-6f6b679f8f-hfrjd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.85.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif3d7613b42a", MAC:"a2:6e:dc:5a:de:29", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 12:55:03.724186 containerd[1536]: 2025-05-15 12:55:03.714 [INFO][5480] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8ce17fe5317bf4f0b4a415e956461662df3ae8c719c91b2cbb1d040743e7ff12" Namespace="kube-system" Pod="coredns-6f6b679f8f-hfrjd" WorkloadEndpoint="172--232--9--197-k8s-coredns--6f6b679f8f--hfrjd-eth0" May 15 12:55:03.790088 containerd[1536]: time="2025-05-15T12:55:03.790044855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-svgwx,Uid:a44ca807-6ed0-447f-9c4f-1a10e61b025b,Namespace:kube-system,Attempt:0,} returns sandbox id \"74e0a7bc3e706533b93b1e586ff89534454a6e759a7cf9a547f52c3d11776bab\"" May 15 12:55:03.791918 kubelet[2701]: E0515 12:55:03.791887 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:55:03.793962 containerd[1536]: time="2025-05-15T12:55:03.793925349Z" level=info msg="connecting to shim 8ce17fe5317bf4f0b4a415e956461662df3ae8c719c91b2cbb1d040743e7ff12" address="unix:///run/containerd/s/29fe40ff15103665651c715205bf5bb7304621abd85ea6628b4ec420b2323025" namespace=k8s.io protocol=ttrpc version=3 May 15 12:55:03.831618 systemd[1]: Started cri-containerd-8ce17fe5317bf4f0b4a415e956461662df3ae8c719c91b2cbb1d040743e7ff12.scope - libcontainer 
container 8ce17fe5317bf4f0b4a415e956461662df3ae8c719c91b2cbb1d040743e7ff12. May 15 12:55:03.888085 containerd[1536]: time="2025-05-15T12:55:03.888050571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hfrjd,Uid:be5ba310-2fcd-4763-b9cf-8dba85ce0f76,Namespace:kube-system,Attempt:0,} returns sandbox id \"8ce17fe5317bf4f0b4a415e956461662df3ae8c719c91b2cbb1d040743e7ff12\"" May 15 12:55:03.889377 kubelet[2701]: E0515 12:55:03.889350 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:55:05.373745 systemd-networkd[1467]: calif3d7613b42a: Gained IPv6LL May 15 12:55:05.408391 containerd[1536]: time="2025-05-15T12:55:05.408107001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cff7bc5b6-sff94,Uid:9345d3c6-0d7e-4691-8b9a-ecd0176d0441,Namespace:calico-system,Attempt:0,}" May 15 12:55:05.502629 systemd-networkd[1467]: cali27e4c9d5e04: Gained IPv6LL May 15 12:55:05.558116 systemd-networkd[1467]: cali63fe9764c25: Link UP May 15 12:55:05.558431 systemd-networkd[1467]: cali63fe9764c25: Gained carrier May 15 12:55:05.583885 containerd[1536]: 2025-05-15 12:55:05.467 [INFO][5638] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--9--197-k8s-calico--kube--controllers--6cff7bc5b6--sff94-eth0 calico-kube-controllers-6cff7bc5b6- calico-system 9345d3c6-0d7e-4691-8b9a-ecd0176d0441 718 0 2025-05-15 12:52:30 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6cff7bc5b6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172-232-9-197 calico-kube-controllers-6cff7bc5b6-sff94 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali63fe9764c25 [] []}} ContainerID="537221d368fcf3bb2bb1536fa8dbb99e4be8b02bab51e7e52acfea981bbc6077" Namespace="calico-system" Pod="calico-kube-controllers-6cff7bc5b6-sff94" WorkloadEndpoint="172--232--9--197-k8s-calico--kube--controllers--6cff7bc5b6--sff94-" May 15 12:55:05.583885 containerd[1536]: 2025-05-15 12:55:05.467 [INFO][5638] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="537221d368fcf3bb2bb1536fa8dbb99e4be8b02bab51e7e52acfea981bbc6077" Namespace="calico-system" Pod="calico-kube-controllers-6cff7bc5b6-sff94" WorkloadEndpoint="172--232--9--197-k8s-calico--kube--controllers--6cff7bc5b6--sff94-eth0" May 15 12:55:05.583885 containerd[1536]: 2025-05-15 12:55:05.505 [INFO][5651] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="537221d368fcf3bb2bb1536fa8dbb99e4be8b02bab51e7e52acfea981bbc6077" HandleID="k8s-pod-network.537221d368fcf3bb2bb1536fa8dbb99e4be8b02bab51e7e52acfea981bbc6077" Workload="172--232--9--197-k8s-calico--kube--controllers--6cff7bc5b6--sff94-eth0" May 15 12:55:05.583885 containerd[1536]: 2025-05-15 12:55:05.515 [INFO][5651] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="537221d368fcf3bb2bb1536fa8dbb99e4be8b02bab51e7e52acfea981bbc6077" HandleID="k8s-pod-network.537221d368fcf3bb2bb1536fa8dbb99e4be8b02bab51e7e52acfea981bbc6077" Workload="172--232--9--197-k8s-calico--kube--controllers--6cff7bc5b6--sff94-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031bab0), 
Attrs:map[string]string{"namespace":"calico-system", "node":"172-232-9-197", "pod":"calico-kube-controllers-6cff7bc5b6-sff94", "timestamp":"2025-05-15 12:55:05.505087495 +0000 UTC"}, Hostname:"172-232-9-197", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 12:55:05.583885 containerd[1536]: 2025-05-15 12:55:05.515 [INFO][5651] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 12:55:05.583885 containerd[1536]: 2025-05-15 12:55:05.515 [INFO][5651] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 12:55:05.583885 containerd[1536]: 2025-05-15 12:55:05.515 [INFO][5651] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-9-197' May 15 12:55:05.583885 containerd[1536]: 2025-05-15 12:55:05.518 [INFO][5651] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.537221d368fcf3bb2bb1536fa8dbb99e4be8b02bab51e7e52acfea981bbc6077" host="172-232-9-197" May 15 12:55:05.583885 containerd[1536]: 2025-05-15 12:55:05.525 [INFO][5651] ipam/ipam.go 372: Looking up existing affinities for host host="172-232-9-197" May 15 12:55:05.583885 containerd[1536]: 2025-05-15 12:55:05.530 [INFO][5651] ipam/ipam.go 489: Trying affinity for 192.168.85.192/26 host="172-232-9-197" May 15 12:55:05.583885 containerd[1536]: 2025-05-15 12:55:05.532 [INFO][5651] ipam/ipam.go 155: Attempting to load block cidr=192.168.85.192/26 host="172-232-9-197" May 15 12:55:05.583885 containerd[1536]: 2025-05-15 12:55:05.535 [INFO][5651] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.85.192/26 host="172-232-9-197" May 15 12:55:05.583885 containerd[1536]: 2025-05-15 12:55:05.535 [INFO][5651] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.85.192/26 handle="k8s-pod-network.537221d368fcf3bb2bb1536fa8dbb99e4be8b02bab51e7e52acfea981bbc6077" host="172-232-9-197" May 15 12:55:05.583885 containerd[1536]: 2025-05-15 12:55:05.536 [INFO][5651] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.537221d368fcf3bb2bb1536fa8dbb99e4be8b02bab51e7e52acfea981bbc6077 May 15 12:55:05.583885 containerd[1536]: 2025-05-15 12:55:05.541 [INFO][5651] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.85.192/26 handle="k8s-pod-network.537221d368fcf3bb2bb1536fa8dbb99e4be8b02bab51e7e52acfea981bbc6077" host="172-232-9-197" May 15 12:55:05.583885 containerd[1536]: 2025-05-15 12:55:05.550 [INFO][5651] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.85.196/26] block=192.168.85.192/26 handle="k8s-pod-network.537221d368fcf3bb2bb1536fa8dbb99e4be8b02bab51e7e52acfea981bbc6077" host="172-232-9-197" May 15 12:55:05.583885 containerd[1536]: 2025-05-15 12:55:05.550 [INFO][5651] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.85.196/26] handle="k8s-pod-network.537221d368fcf3bb2bb1536fa8dbb99e4be8b02bab51e7e52acfea981bbc6077" host="172-232-9-197" May 15 12:55:05.583885 containerd[1536]: 2025-05-15 12:55:05.550 [INFO][5651] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 15 12:55:05.583885 containerd[1536]: 2025-05-15 12:55:05.550 [INFO][5651] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.85.196/26] IPv6=[] ContainerID="537221d368fcf3bb2bb1536fa8dbb99e4be8b02bab51e7e52acfea981bbc6077" HandleID="k8s-pod-network.537221d368fcf3bb2bb1536fa8dbb99e4be8b02bab51e7e52acfea981bbc6077" Workload="172--232--9--197-k8s-calico--kube--controllers--6cff7bc5b6--sff94-eth0" May 15 12:55:05.585552 containerd[1536]: 2025-05-15 12:55:05.553 [INFO][5638] cni-plugin/k8s.go 386: Populated endpoint ContainerID="537221d368fcf3bb2bb1536fa8dbb99e4be8b02bab51e7e52acfea981bbc6077" Namespace="calico-system" Pod="calico-kube-controllers-6cff7bc5b6-sff94" WorkloadEndpoint="172--232--9--197-k8s-calico--kube--controllers--6cff7bc5b6--sff94-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--9--197-k8s-calico--kube--controllers--6cff7bc5b6--sff94-eth0", GenerateName:"calico-kube-controllers-6cff7bc5b6-", Namespace:"calico-system", SelfLink:"", UID:"9345d3c6-0d7e-4691-8b9a-ecd0176d0441", ResourceVersion:"718", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 12, 52, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6cff7bc5b6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-9-197", ContainerID:"", Pod:"calico-kube-controllers-6cff7bc5b6-sff94", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.85.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali63fe9764c25", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 12:55:05.585552 containerd[1536]: 2025-05-15 12:55:05.553 [INFO][5638] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.85.196/32] ContainerID="537221d368fcf3bb2bb1536fa8dbb99e4be8b02bab51e7e52acfea981bbc6077" Namespace="calico-system" Pod="calico-kube-controllers-6cff7bc5b6-sff94" WorkloadEndpoint="172--232--9--197-k8s-calico--kube--controllers--6cff7bc5b6--sff94-eth0" May 15 12:55:05.585552 containerd[1536]: 2025-05-15 12:55:05.554 [INFO][5638] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali63fe9764c25 ContainerID="537221d368fcf3bb2bb1536fa8dbb99e4be8b02bab51e7e52acfea981bbc6077" Namespace="calico-system" Pod="calico-kube-controllers-6cff7bc5b6-sff94" WorkloadEndpoint="172--232--9--197-k8s-calico--kube--controllers--6cff7bc5b6--sff94-eth0" May 15 12:55:05.585552 containerd[1536]: 2025-05-15 12:55:05.559 [INFO][5638] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="537221d368fcf3bb2bb1536fa8dbb99e4be8b02bab51e7e52acfea981bbc6077" Namespace="calico-system" Pod="calico-kube-controllers-6cff7bc5b6-sff94" WorkloadEndpoint="172--232--9--197-k8s-calico--kube--controllers--6cff7bc5b6--sff94-eth0" May 15 12:55:05.585552 containerd[1536]: 2025-05-15 12:55:05.561 [INFO][5638] cni-plugin/k8s.go 414: Added 
Mac, interface name, and active container ID to endpoint ContainerID="537221d368fcf3bb2bb1536fa8dbb99e4be8b02bab51e7e52acfea981bbc6077" Namespace="calico-system" Pod="calico-kube-controllers-6cff7bc5b6-sff94" WorkloadEndpoint="172--232--9--197-k8s-calico--kube--controllers--6cff7bc5b6--sff94-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--9--197-k8s-calico--kube--controllers--6cff7bc5b6--sff94-eth0", GenerateName:"calico-kube-controllers-6cff7bc5b6-", Namespace:"calico-system", SelfLink:"", UID:"9345d3c6-0d7e-4691-8b9a-ecd0176d0441", ResourceVersion:"718", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 12, 52, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6cff7bc5b6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-9-197", ContainerID:"537221d368fcf3bb2bb1536fa8dbb99e4be8b02bab51e7e52acfea981bbc6077", Pod:"calico-kube-controllers-6cff7bc5b6-sff94", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.85.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali63fe9764c25", MAC:"0e:61:23:2f:12:0a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 12:55:05.585552 containerd[1536]: 2025-05-15 12:55:05.580 [INFO][5638] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="537221d368fcf3bb2bb1536fa8dbb99e4be8b02bab51e7e52acfea981bbc6077" Namespace="calico-system" Pod="calico-kube-controllers-6cff7bc5b6-sff94" WorkloadEndpoint="172--232--9--197-k8s-calico--kube--controllers--6cff7bc5b6--sff94-eth0" May 15 12:55:05.632754 containerd[1536]: time="2025-05-15T12:55:05.632118950Z" level=info msg="connecting to shim 537221d368fcf3bb2bb1536fa8dbb99e4be8b02bab51e7e52acfea981bbc6077" address="unix:///run/containerd/s/fb21a4bc4278c0126f9787c7290739190e1a919664b545b2efc688d2c5ca6c4d" namespace=k8s.io protocol=ttrpc version=3 May 15 12:55:05.675782 systemd[1]: Started cri-containerd-537221d368fcf3bb2bb1536fa8dbb99e4be8b02bab51e7e52acfea981bbc6077.scope - libcontainer container 537221d368fcf3bb2bb1536fa8dbb99e4be8b02bab51e7e52acfea981bbc6077. May 15 12:55:05.727189 containerd[1536]: time="2025-05-15T12:55:05.727145492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cff7bc5b6-sff94,Uid:9345d3c6-0d7e-4691-8b9a-ecd0176d0441,Namespace:calico-system,Attempt:0,} returns sandbox id \"537221d368fcf3bb2bb1536fa8dbb99e4be8b02bab51e7e52acfea981bbc6077\"" May 15 12:55:06.909643 systemd-networkd[1467]: cali63fe9764c25: Gained IPv6LL May 15 12:55:08.578379 systemd[1]: Started sshd@21-172.232.9.197:22-139.178.89.65:43854.service - OpenSSH per-connection server daemon (139.178.89.65:43854). 
May 15 12:55:08.643793 containerd[1536]: time="2025-05-15T12:55:08.643749293Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:55:08.644391 containerd[1536]: time="2025-05-15T12:55:08.644362543Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773" May 15 12:55:08.645044 containerd[1536]: time="2025-05-15T12:55:08.645013274Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:55:08.646796 containerd[1536]: time="2025-05-15T12:55:08.646537826Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:55:08.647381 containerd[1536]: time="2025-05-15T12:55:08.647353257Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 6.577211807s" May 15 12:55:08.647489 containerd[1536]: time="2025-05-15T12:55:08.647448667Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\"" May 15 12:55:08.648992 containerd[1536]: time="2025-05-15T12:55:08.648963158Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 15 12:55:08.650340 containerd[1536]: time="2025-05-15T12:55:08.650316450Z" level=info msg="CreateContainer within sandbox \"1c991a4ed6f94db5308513a763b1c7b33d11edaa8f9e7ea23105182d9ec7bf31\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 15 12:55:08.656475 containerd[1536]: time="2025-05-15T12:55:08.656206136Z" level=info msg="Container e7c4ca4e4166cec8f3ec375421202ed9a0ca7a3e8652ede650a31d97de42dca9: CDI devices from CRI Config.CDIDevices: []" May 15 12:55:08.680778 containerd[1536]: time="2025-05-15T12:55:08.680744692Z" level=info msg="CreateContainer within sandbox \"1c991a4ed6f94db5308513a763b1c7b33d11edaa8f9e7ea23105182d9ec7bf31\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"e7c4ca4e4166cec8f3ec375421202ed9a0ca7a3e8652ede650a31d97de42dca9\"" May 15 12:55:08.681190 containerd[1536]: time="2025-05-15T12:55:08.681160922Z" level=info msg="StartContainer for \"e7c4ca4e4166cec8f3ec375421202ed9a0ca7a3e8652ede650a31d97de42dca9\"" May 15 12:55:08.682266 containerd[1536]: time="2025-05-15T12:55:08.682236333Z" level=info msg="connecting to shim e7c4ca4e4166cec8f3ec375421202ed9a0ca7a3e8652ede650a31d97de42dca9" address="unix:///run/containerd/s/079e39ee3e77303eba473096e5fa577032b5abe9c86a1ac9bb4f573eed31cd98" protocol=ttrpc version=3 May 15 12:55:08.710754 systemd[1]: Started cri-containerd-e7c4ca4e4166cec8f3ec375421202ed9a0ca7a3e8652ede650a31d97de42dca9.scope - libcontainer container e7c4ca4e4166cec8f3ec375421202ed9a0ca7a3e8652ede650a31d97de42dca9. 
May 15 12:55:08.765433 containerd[1536]: time="2025-05-15T12:55:08.765365261Z" level=info msg="StartContainer for \"e7c4ca4e4166cec8f3ec375421202ed9a0ca7a3e8652ede650a31d97de42dca9\" returns successfully" May 15 12:55:08.894938 kubelet[2701]: I0515 12:55:08.894641 2701 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-d7hrh" podStartSLOduration=146.469182559 podStartE2EDuration="2m38.894621787s" podCreationTimestamp="2025-05-15 12:52:30 +0000 UTC" firstStartedPulling="2025-05-15 12:54:56.2232797 +0000 UTC m=+157.917189962" lastFinishedPulling="2025-05-15 12:55:08.648718928 +0000 UTC m=+170.342629190" observedRunningTime="2025-05-15 12:55:08.886616748 +0000 UTC m=+170.580527040" watchObservedRunningTime="2025-05-15 12:55:08.894621787 +0000 UTC m=+170.588532049" May 15 12:55:08.919640 sshd[5726]: Accepted publickey for core from 139.178.89.65 port 43854 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE May 15 12:55:08.921101 sshd-session[5726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:55:08.927203 systemd-logind[1520]: New session 19 of user core. May 15 12:55:08.932789 systemd[1]: Started session-19.scope - Session 19 of User core. May 15 12:55:09.235267 sshd[5759]: Connection closed by 139.178.89.65 port 43854 May 15 12:55:09.236452 sshd-session[5726]: pam_unix(sshd:session): session closed for user core May 15 12:55:09.241774 systemd-logind[1520]: Session 19 logged out. Waiting for processes to exit. May 15 12:55:09.241983 systemd[1]: sshd@21-172.232.9.197:22-139.178.89.65:43854.service: Deactivated successfully. May 15 12:55:09.244447 systemd[1]: session-19.scope: Deactivated successfully. May 15 12:55:09.247188 systemd-logind[1520]: Removed session 19. May 15 12:55:09.535971 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1286415510.mount: Deactivated successfully. 
May 15 12:55:09.582132 kubelet[2701]: I0515 12:55:09.582099 2701 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 15 12:55:09.582132 kubelet[2701]: I0515 12:55:09.582128 2701 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 15 12:55:10.832103 containerd[1536]: time="2025-05-15T12:55:10.832058803Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:55:10.833263 containerd[1536]: time="2025-05-15T12:55:10.833231645Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" May 15 12:55:10.834486 containerd[1536]: time="2025-05-15T12:55:10.833634335Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:55:10.835528 containerd[1536]: time="2025-05-15T12:55:10.835507777Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:55:10.837064 containerd[1536]: time="2025-05-15T12:55:10.837039659Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.188045931s" May 15 12:55:10.837121 containerd[1536]: time="2025-05-15T12:55:10.837065879Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 15 12:55:10.838533 containerd[1536]: time="2025-05-15T12:55:10.838213640Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 15 12:55:10.840530 containerd[1536]: time="2025-05-15T12:55:10.840508272Z" level=info msg="CreateContainer within sandbox \"74e0a7bc3e706533b93b1e586ff89534454a6e759a7cf9a547f52c3d11776bab\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 12:55:10.848103 containerd[1536]: time="2025-05-15T12:55:10.848060690Z" level=info msg="Container fe37fac4b606c76506971c3dd1bd42951b0c44c48f5283a83ab0ce00c827077c: CDI devices from CRI Config.CDIDevices: []" May 15 12:55:10.855634 containerd[1536]: time="2025-05-15T12:55:10.855600568Z" level=info msg="CreateContainer within sandbox \"74e0a7bc3e706533b93b1e586ff89534454a6e759a7cf9a547f52c3d11776bab\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fe37fac4b606c76506971c3dd1bd42951b0c44c48f5283a83ab0ce00c827077c\"" May 15 12:55:10.856169 containerd[1536]: time="2025-05-15T12:55:10.855977518Z" level=info msg="StartContainer for \"fe37fac4b606c76506971c3dd1bd42951b0c44c48f5283a83ab0ce00c827077c\"" May 15 12:55:10.858718 containerd[1536]: time="2025-05-15T12:55:10.858698222Z" level=info msg="connecting to shim fe37fac4b606c76506971c3dd1bd42951b0c44c48f5283a83ab0ce00c827077c" address="unix:///run/containerd/s/6f38fe7fba66a611b307a5c0d7ea7f1a93f5f62e8b4658e0f4b60c7cf924765d" protocol=ttrpc version=3 May 
15 12:55:10.882615 systemd[1]: Started cri-containerd-fe37fac4b606c76506971c3dd1bd42951b0c44c48f5283a83ab0ce00c827077c.scope - libcontainer container fe37fac4b606c76506971c3dd1bd42951b0c44c48f5283a83ab0ce00c827077c. May 15 12:55:10.918594 containerd[1536]: time="2025-05-15T12:55:10.918273563Z" level=info msg="StartContainer for \"fe37fac4b606c76506971c3dd1bd42951b0c44c48f5283a83ab0ce00c827077c\" returns successfully" May 15 12:55:11.028489 containerd[1536]: time="2025-05-15T12:55:11.026633456Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:55:11.028489 containerd[1536]: time="2025-05-15T12:55:11.027266748Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=0" May 15 12:55:11.030740 containerd[1536]: time="2025-05-15T12:55:11.030685181Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 192.445071ms" May 15 12:55:11.030740 containerd[1536]: time="2025-05-15T12:55:11.030717881Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 15 12:55:11.031975 containerd[1536]: time="2025-05-15T12:55:11.031927022Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 15 12:55:11.036158 containerd[1536]: time="2025-05-15T12:55:11.035394326Z" level=info msg="CreateContainer within sandbox \"8ce17fe5317bf4f0b4a415e956461662df3ae8c719c91b2cbb1d040743e7ff12\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 12:55:11.043642 containerd[1536]: time="2025-05-15T12:55:11.043604364Z" level=info msg="Container 75d3d0c281ebd91d772943f9ecaacb4fbb38c5ec0dc54e74d546129c1773537c: CDI devices from CRI Config.CDIDevices: []" May 15 12:55:11.054147 containerd[1536]: time="2025-05-15T12:55:11.054106715Z" level=info msg="CreateContainer within sandbox \"8ce17fe5317bf4f0b4a415e956461662df3ae8c719c91b2cbb1d040743e7ff12\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"75d3d0c281ebd91d772943f9ecaacb4fbb38c5ec0dc54e74d546129c1773537c\"" May 15 12:55:11.055337 containerd[1536]: time="2025-05-15T12:55:11.055301817Z" level=info msg="StartContainer for \"75d3d0c281ebd91d772943f9ecaacb4fbb38c5ec0dc54e74d546129c1773537c\"" May 15 12:55:11.056764 containerd[1536]: time="2025-05-15T12:55:11.056729368Z" level=info msg="connecting to shim 75d3d0c281ebd91d772943f9ecaacb4fbb38c5ec0dc54e74d546129c1773537c" address="unix:///run/containerd/s/29fe40ff15103665651c715205bf5bb7304621abd85ea6628b4ec420b2323025" protocol=ttrpc version=3 May 15 12:55:11.079784 systemd[1]: Started cri-containerd-75d3d0c281ebd91d772943f9ecaacb4fbb38c5ec0dc54e74d546129c1773537c.scope - libcontainer container 75d3d0c281ebd91d772943f9ecaacb4fbb38c5ec0dc54e74d546129c1773537c. 
May 15 12:55:11.148636 containerd[1536]: time="2025-05-15T12:55:11.147667093Z" level=info msg="StartContainer for \"75d3d0c281ebd91d772943f9ecaacb4fbb38c5ec0dc54e74d546129c1773537c\" returns successfully" May 15 12:55:11.886266 kubelet[2701]: E0515 12:55:11.886223 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:55:11.889325 kubelet[2701]: E0515 12:55:11.889297 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:55:11.901914 kubelet[2701]: I0515 12:55:11.901870 2701 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-hfrjd" podStartSLOduration=160.760903687 podStartE2EDuration="2m47.901858586s" podCreationTimestamp="2025-05-15 12:52:24 +0000 UTC" firstStartedPulling="2025-05-15 12:55:03.890653013 +0000 UTC m=+165.584563275" lastFinishedPulling="2025-05-15 12:55:11.031607912 +0000 UTC m=+172.725518174" observedRunningTime="2025-05-15 12:55:11.897928092 +0000 UTC m=+173.591838354" watchObservedRunningTime="2025-05-15 12:55:11.901858586 +0000 UTC m=+173.595768848" May 15 12:55:11.961494 kubelet[2701]: I0515 12:55:11.961149 2701 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-svgwx" podStartSLOduration=160.918026078 podStartE2EDuration="2m47.960940047s" podCreationTimestamp="2025-05-15 12:52:24 +0000 UTC" firstStartedPulling="2025-05-15 12:55:03.7948625 +0000 UTC m=+165.488772762" lastFinishedPulling="2025-05-15 12:55:10.837776459 +0000 UTC m=+172.531686731" observedRunningTime="2025-05-15 12:55:11.941057936 +0000 UTC m=+173.634968228" watchObservedRunningTime="2025-05-15 12:55:11.960940047 +0000 UTC m=+173.654850309" May 15 12:55:12.349868 kubelet[2701]: I0515 12:55:12.349837 2701 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 12:55:12.349868 kubelet[2701]: I0515 12:55:12.349872 2701 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 12:55:12.350331 kubelet[2701]: I0515 12:55:12.350054 2701 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-6cff7bc5b6-sff94","calico-system/calico-typha-59b79bbb46-9qqgw","kube-system/coredns-6f6b679f8f-hfrjd","kube-system/coredns-6f6b679f8f-svgwx","calico-system/calico-node-qxkgl","kube-system/kube-controller-manager-172-232-9-197","kube-system/kube-proxy-rhz8r","kube-system/kube-apiserver-172-232-9-197","calico-system/csi-node-driver-d7hrh","kube-system/kube-scheduler-172-232-9-197"] May 15 12:55:12.350331 kubelet[2701]: E0515 12:55:12.350089 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-6cff7bc5b6-sff94" May 15 12:55:12.350331 kubelet[2701]: E0515 12:55:12.350102 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-59b79bbb46-9qqgw" May 15 12:55:12.350331 kubelet[2701]: E0515 12:55:12.350112 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-hfrjd" May 15 12:55:12.350331 kubelet[2701]: E0515 12:55:12.350120 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical 
pod" pod="kube-system/coredns-6f6b679f8f-svgwx" May 15 12:55:12.350331 kubelet[2701]: E0515 12:55:12.350128 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-qxkgl" May 15 12:55:12.350331 kubelet[2701]: E0515 12:55:12.350137 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-232-9-197" May 15 12:55:12.350331 kubelet[2701]: E0515 12:55:12.350145 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-rhz8r" May 15 12:55:12.350331 kubelet[2701]: E0515 12:55:12.350152 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-232-9-197" May 15 12:55:12.350331 kubelet[2701]: E0515 12:55:12.350163 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-d7hrh" May 15 12:55:12.350331 kubelet[2701]: E0515 12:55:12.350170 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-232-9-197" May 15 12:55:12.350331 kubelet[2701]: I0515 12:55:12.350179 2701 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 12:55:12.891729 kubelet[2701]: E0515 12:55:12.891647 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:55:12.892426 kubelet[2701]: E0515 12:55:12.892308 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:55:13.894166 kubelet[2701]: E0515 12:55:13.894122 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:55:13.894597 kubelet[2701]: E0515 12:55:13.894510 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:55:14.310999 systemd[1]: Started sshd@22-172.232.9.197:22-139.178.89.65:43858.service - OpenSSH per-connection server daemon (139.178.89.65:43858). May 15 12:55:14.656827 sshd[5898]: Accepted publickey for core from 139.178.89.65 port 43858 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE May 15 12:55:14.658881 sshd-session[5898]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:55:14.665256 systemd-logind[1520]: New session 20 of user core. May 15 12:55:14.668777 systemd[1]: Started session-20.scope - Session 20 of User core. May 15 12:55:14.969161 sshd[5900]: Connection closed by 139.178.89.65 port 43858 May 15 12:55:14.969712 sshd-session[5898]: pam_unix(sshd:session): session closed for user core May 15 12:55:14.975213 systemd[1]: sshd@22-172.232.9.197:22-139.178.89.65:43858.service: Deactivated successfully. May 15 12:55:14.978071 systemd[1]: session-20.scope: Deactivated successfully. May 15 12:55:14.979936 systemd-logind[1520]: Session 20 logged out. Waiting for processes to exit. May 15 12:55:14.981147 systemd-logind[1520]: Removed session 20. 
May 15 12:55:18.041915 containerd[1536]: time="2025-05-15T12:55:18.041868792Z" level=error msg="failed to cleanup \"extract-653028840-80Vc sha256:b3780a5f3330c62bddaf1597bd34a37b8e3d892f0c36506cfd7180dbeb567bf6\"" error="write /var/lib/containerd/io.containerd.metadata.v1.bolt/meta.db: no space left on device" May 15 12:55:18.042598 containerd[1536]: time="2025-05-15T12:55:18.042496752Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\": failed to extract layer sha256:9ad9e3f4f50f7d9fe222699b04d43c08f22ca43bdb7e52c69c3beb9a90a5ce1e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/91/fs/usr/bin/kube-controllers: no space left on device" May 15 12:55:18.042598 containerd[1536]: time="2025-05-15T12:55:18.042559782Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=34789138" May 15 12:55:18.042990 kubelet[2701]: E0515 12:55:18.042942 2701 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\": failed to extract layer sha256:9ad9e3f4f50f7d9fe222699b04d43c08f22ca43bdb7e52c69c3beb9a90a5ce1e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/91/fs/usr/bin/kube-controllers: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.29.3" May 15 12:55:18.043253 kubelet[2701]: E0515 12:55:18.042997 2701 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\": failed to extract layer sha256:9ad9e3f4f50f7d9fe222699b04d43c08f22ca43bdb7e52c69c3beb9a90a5ce1e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/91/fs/usr/bin/kube-controllers: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.29.3" May 15 12:55:18.043807 kubelet[2701]: E0515 12:55:18.043121 2701 kuberuntime_manager.go:1272] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.29.3,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,ValueFrom:nil,},EnvVar{Name:FIPS_MODE_ENABLED,Value:false,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xdhfc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6cff7bc5b6-sff94_calico-system(9345d3c6-0d7e-4691-8b9a-ecd0176d0441): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\": failed to extract layer sha256:9ad9e3f4f50f7d9fe222699b04d43c08f22ca43bdb7e52c69c3beb9a90a5ce1e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/91/fs/usr/bin/kube-controllers: no space left on device" logger="UnhandledError" May 15 12:55:18.045557 kubelet[2701]: E0515 12:55:18.045486 2701 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\\\": failed to extract layer sha256:9ad9e3f4f50f7d9fe222699b04d43c08f22ca43bdb7e52c69c3beb9a90a5ce1e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/91/fs/usr/bin/kube-controllers: no space left on device\"" pod="calico-system/calico-kube-controllers-6cff7bc5b6-sff94" 
podUID="9345d3c6-0d7e-4691-8b9a-ecd0176d0441" May 15 12:55:18.912108 kubelet[2701]: E0515 12:55:18.911869 2701 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\\\"\"" pod="calico-system/calico-kube-controllers-6cff7bc5b6-sff94" podUID="9345d3c6-0d7e-4691-8b9a-ecd0176d0441" May 15 12:55:20.034699 systemd[1]: Started sshd@23-172.232.9.197:22-139.178.89.65:49766.service - OpenSSH per-connection server daemon (139.178.89.65:49766). May 15 12:55:20.394684 sshd[5935]: Accepted publickey for core from 139.178.89.65 port 49766 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE May 15 12:55:20.395626 sshd-session[5935]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:55:20.401321 systemd-logind[1520]: New session 21 of user core. May 15 12:55:20.408491 systemd[1]: Started session-21.scope - Session 21 of User core. May 15 12:55:20.701933 sshd[5937]: Connection closed by 139.178.89.65 port 49766 May 15 12:55:20.703026 sshd-session[5935]: pam_unix(sshd:session): session closed for user core May 15 12:55:20.708868 systemd[1]: sshd@23-172.232.9.197:22-139.178.89.65:49766.service: Deactivated successfully. May 15 12:55:20.713334 systemd[1]: session-21.scope: Deactivated successfully. May 15 12:55:20.715442 systemd-logind[1520]: Session 21 logged out. Waiting for processes to exit. May 15 12:55:20.717640 systemd-logind[1520]: Removed session 21. May 15 12:55:20.766516 systemd[1]: Started sshd@24-172.232.9.197:22-139.178.89.65:49776.service - OpenSSH per-connection server daemon (139.178.89.65:49776). May 15 12:55:21.104980 sshd[5949]: Accepted publickey for core from 139.178.89.65 port 49776 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE May 15 12:55:21.106437 sshd-session[5949]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:55:21.113869 systemd-logind[1520]: New session 22 of user core. May 15 12:55:21.120834 systemd[1]: Started session-22.scope - Session 22 of User core. May 15 12:55:21.404545 sshd[5951]: Connection closed by 139.178.89.65 port 49776 May 15 12:55:21.406560 sshd-session[5949]: pam_unix(sshd:session): session closed for user core May 15 12:55:21.412001 systemd-logind[1520]: Session 22 logged out. Waiting for processes to exit. May 15 12:55:21.413838 systemd[1]: sshd@24-172.232.9.197:22-139.178.89.65:49776.service: Deactivated successfully. May 15 12:55:21.416454 systemd[1]: session-22.scope: Deactivated successfully. May 15 12:55:21.418411 systemd-logind[1520]: Removed session 22. 
May 15 12:55:22.384125 kubelet[2701]: I0515 12:55:22.384083 2701 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 15 12:55:22.384125 kubelet[2701]: I0515 12:55:22.384128 2701 container_gc.go:88] "Attempting to delete unused containers" May 15 12:55:22.386090 kubelet[2701]: I0515 12:55:22.386073 2701 image_gc_manager.go:431] "Attempting to delete unused images" May 15 12:55:22.434064 kubelet[2701]: I0515 12:55:22.434032 2701 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 15 12:55:22.434225 kubelet[2701]: I0515 12:55:22.434191 2701 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-6cff7bc5b6-sff94","calico-system/calico-typha-59b79bbb46-9qqgw","kube-system/coredns-6f6b679f8f-hfrjd","kube-system/coredns-6f6b679f8f-svgwx","calico-system/calico-node-qxkgl","kube-system/kube-controller-manager-172-232-9-197","calico-system/csi-node-driver-d7hrh","kube-system/kube-proxy-rhz8r","kube-system/kube-apiserver-172-232-9-197","kube-system/kube-scheduler-172-232-9-197"] May 15 12:55:22.434225 kubelet[2701]: E0515 12:55:22.434220 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-6cff7bc5b6-sff94" May 15 12:55:22.434328 kubelet[2701]: E0515 12:55:22.434235 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-59b79bbb46-9qqgw" May 15 12:55:22.434328 kubelet[2701]: E0515 12:55:22.434244 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-hfrjd" May 15 12:55:22.434328 kubelet[2701]: E0515 12:55:22.434252 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-6f6b679f8f-svgwx" May 15 12:55:22.434328 kubelet[2701]: E0515 12:55:22.434259 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-qxkgl" May 15 12:55:22.434328 kubelet[2701]: E0515 12:55:22.434266 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-232-9-197" May 15 12:55:22.434328 kubelet[2701]: E0515 12:55:22.434277 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-d7hrh" May 15 12:55:22.434328 kubelet[2701]: E0515 12:55:22.434285 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-rhz8r" May 15 12:55:22.434328 kubelet[2701]: E0515 12:55:22.434294 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-232-9-197" May 15 12:55:22.434328 kubelet[2701]: E0515 12:55:22.434302 2701 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-232-9-197" May 15 12:55:22.434328 kubelet[2701]: I0515 12:55:22.434326 2701 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 15 12:55:23.407230 kubelet[2701]: E0515 12:55:23.407103 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 15 12:55:25.724487 containerd[1536]: time="2025-05-15T12:55:25.724305099Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"3ba8df478ca1ec77242c961eab74e13c0924aa16d506c3941b2d18a7d8dfee92\" id:\"9f33f9db5f905c3ec7ce23ac183e66c368c32912b710cffd799509af0f27417f\" pid:5974 exited_at:{seconds:1747313725 nanos:724012508}" May 15 12:55:25.728726 kubelet[2701]: E0515 12:55:25.728578 2701 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"