Aug 13 01:11:49.075172 kernel: Linux version 6.12.40-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 21:42:48 -00 2025 Aug 13 01:11:49.075220 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21 Aug 13 01:11:49.075230 kernel: BIOS-provided physical RAM map: Aug 13 01:11:49.075242 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable Aug 13 01:11:49.075248 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved Aug 13 01:11:49.075254 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Aug 13 01:11:49.075261 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable Aug 13 01:11:49.075267 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved Aug 13 01:11:49.075273 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Aug 13 01:11:49.075279 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Aug 13 01:11:49.075285 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Aug 13 01:11:49.075291 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Aug 13 01:11:49.075300 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable Aug 13 01:11:49.075306 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Aug 13 01:11:49.075313 kernel: NX (Execute Disable) protection: active Aug 13 01:11:49.075320 kernel: APIC: Static calls initialized Aug 13 01:11:49.075326 kernel: SMBIOS 2.8 present. 
Aug 13 01:11:49.075335 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified Aug 13 01:11:49.075341 kernel: DMI: Memory slots populated: 1/1 Aug 13 01:11:49.075348 kernel: Hypervisor detected: KVM Aug 13 01:11:49.075354 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Aug 13 01:11:49.075361 kernel: kvm-clock: using sched offset of 8385694950 cycles Aug 13 01:11:49.075368 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Aug 13 01:11:49.075375 kernel: tsc: Detected 1999.996 MHz processor Aug 13 01:11:49.075382 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Aug 13 01:11:49.075389 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Aug 13 01:11:49.075396 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000 Aug 13 01:11:49.075405 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Aug 13 01:11:49.075412 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Aug 13 01:11:49.075419 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000 Aug 13 01:11:49.075426 kernel: Using GB pages for direct mapping Aug 13 01:11:49.075432 kernel: ACPI: Early table checksum verification disabled Aug 13 01:11:49.075439 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS ) Aug 13 01:11:49.075446 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 01:11:49.075453 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 01:11:49.075459 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 01:11:49.075468 kernel: ACPI: FACS 0x000000007FFE0000 000040 Aug 13 01:11:49.075475 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 01:11:49.080066 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 01:11:49.080077 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 01:11:49.080091 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 01:11:49.080099 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea] Aug 13 01:11:49.080109 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6] Aug 13 01:11:49.080116 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Aug 13 01:11:49.080123 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a] Aug 13 01:11:49.080130 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2] Aug 13 01:11:49.080138 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de] Aug 13 01:11:49.080145 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306] Aug 13 01:11:49.080152 kernel: No NUMA configuration found Aug 13 01:11:49.080162 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff] Aug 13 01:11:49.080169 kernel: NODE_DATA(0) allocated [mem 0x17fff6dc0-0x17fffdfff] Aug 13 01:11:49.080177 kernel: Zone ranges: Aug 13 01:11:49.080184 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Aug 13 01:11:49.080191 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Aug 13 01:11:49.080198 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff] Aug 13 01:11:49.080205 kernel: Device empty Aug 13 01:11:49.080212 kernel: Movable zone start for each node Aug 13 01:11:49.080219 kernel: Early memory node ranges Aug 13 01:11:49.080226 kernel: 
node 0: [mem 0x0000000000001000-0x000000000009efff] Aug 13 01:11:49.080236 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff] Aug 13 01:11:49.080243 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff] Aug 13 01:11:49.080250 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff] Aug 13 01:11:49.080257 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Aug 13 01:11:49.080264 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Aug 13 01:11:49.080271 kernel: On node 0, zone Normal: 35 pages in unavailable ranges Aug 13 01:11:49.080278 kernel: ACPI: PM-Timer IO Port: 0x608 Aug 13 01:11:49.080286 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Aug 13 01:11:49.080293 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Aug 13 01:11:49.080302 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Aug 13 01:11:49.080309 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Aug 13 01:11:49.080316 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Aug 13 01:11:49.080323 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Aug 13 01:11:49.080330 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Aug 13 01:11:49.080338 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Aug 13 01:11:49.080345 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Aug 13 01:11:49.080352 kernel: TSC deadline timer available Aug 13 01:11:49.080359 kernel: CPU topo: Max. logical packages: 1 Aug 13 01:11:49.080368 kernel: CPU topo: Max. logical dies: 1 Aug 13 01:11:49.080375 kernel: CPU topo: Max. dies per package: 1 Aug 13 01:11:49.080382 kernel: CPU topo: Max. threads per core: 1 Aug 13 01:11:49.080389 kernel: CPU topo: Num. cores per package: 2 Aug 13 01:11:49.080396 kernel: CPU topo: Num. threads per package: 2 Aug 13 01:11:49.080403 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Aug 13 01:11:49.080410 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Aug 13 01:11:49.080417 kernel: kvm-guest: KVM setup pv remote TLB flush Aug 13 01:11:49.080424 kernel: kvm-guest: setup PV sched yield Aug 13 01:11:49.080433 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Aug 13 01:11:49.080440 kernel: Booting paravirtualized kernel on KVM Aug 13 01:11:49.080448 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Aug 13 01:11:49.080455 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Aug 13 01:11:49.080462 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Aug 13 01:11:49.080469 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Aug 13 01:11:49.080477 kernel: pcpu-alloc: [0] 0 1 Aug 13 01:11:49.080483 kernel: kvm-guest: PV spinlocks enabled Aug 13 01:11:49.080491 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Aug 13 01:11:49.080501 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21 Aug 13 01:11:49.080509 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Aug 13 01:11:49.080516 kernel: random: crng init done Aug 13 01:11:49.080523 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Aug 13 01:11:49.080530 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Aug 13 01:11:49.080540 kernel: Fallback order for Node 0: 0 Aug 13 01:11:49.080552 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443 Aug 13 01:11:49.080559 kernel: Policy zone: Normal Aug 13 01:11:49.080568 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Aug 13 01:11:49.080575 kernel: software IO TLB: area num 2. Aug 13 01:11:49.080582 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Aug 13 01:11:49.080590 kernel: ftrace: allocating 40098 entries in 157 pages Aug 13 01:11:49.080596 kernel: ftrace: allocated 157 pages with 5 groups Aug 13 01:11:49.080604 kernel: Dynamic Preempt: voluntary Aug 13 01:11:49.080611 kernel: rcu: Preemptible hierarchical RCU implementation. Aug 13 01:11:49.080620 kernel: rcu: RCU event tracing is enabled. Aug 13 01:11:49.080627 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Aug 13 01:11:49.080637 kernel: Trampoline variant of Tasks RCU enabled. Aug 13 01:11:49.080644 kernel: Rude variant of Tasks RCU enabled. Aug 13 01:11:49.080651 kernel: Tracing variant of Tasks RCU enabled. Aug 13 01:11:49.080658 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Aug 13 01:11:49.080665 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Aug 13 01:11:49.080672 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 13 01:11:49.080686 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 13 01:11:49.080696 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 13 01:11:49.080703 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Aug 13 01:11:49.080710 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Aug 13 01:11:49.080718 kernel: Console: colour VGA+ 80x25 Aug 13 01:11:49.080725 kernel: printk: legacy console [tty0] enabled Aug 13 01:11:49.080734 kernel: printk: legacy console [ttyS0] enabled Aug 13 01:11:49.080741 kernel: ACPI: Core revision 20240827 Aug 13 01:11:49.080749 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Aug 13 01:11:49.080756 kernel: APIC: Switch to symmetric I/O mode setup Aug 13 01:11:49.080764 kernel: x2apic enabled Aug 13 01:11:49.080773 kernel: APIC: Switched APIC routing to: physical x2apic Aug 13 01:11:49.080781 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Aug 13 01:11:49.080788 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Aug 13 01:11:49.080795 kernel: kvm-guest: setup PV IPIs Aug 13 01:11:49.081154 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Aug 13 01:11:49.081992 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a8554e05d, max_idle_ns: 881590540420 ns Aug 13 01:11:49.082033 kernel: Calibrating delay loop (skipped) preset value.. 
3999.99 BogoMIPS (lpj=1999996) Aug 13 01:11:49.082050 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Aug 13 01:11:49.082059 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Aug 13 01:11:49.082071 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Aug 13 01:11:49.082079 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Aug 13 01:11:49.082086 kernel: Spectre V2 : Mitigation: Retpolines Aug 13 01:11:49.082094 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Aug 13 01:11:49.082101 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Aug 13 01:11:49.082109 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Aug 13 01:11:49.082117 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Aug 13 01:11:49.082124 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Aug 13 01:11:49.082135 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Aug 13 01:11:49.082143 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Aug 13 01:11:49.082150 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Aug 13 01:11:49.082158 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Aug 13 01:11:49.082165 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Aug 13 01:11:49.082172 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Aug 13 01:11:49.082180 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Aug 13 01:11:49.082187 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Aug 13 01:11:49.082195 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Aug 13 01:11:49.082204 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8 Aug 13 01:11:49.082212 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format. Aug 13 01:11:49.082219 kernel: Freeing SMP alternatives memory: 32K Aug 13 01:11:49.082226 kernel: pid_max: default: 32768 minimum: 301 Aug 13 01:11:49.082234 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Aug 13 01:11:49.082242 kernel: landlock: Up and running. Aug 13 01:11:49.082249 kernel: SELinux: Initializing. Aug 13 01:11:49.082256 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Aug 13 01:11:49.082264 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Aug 13 01:11:49.082273 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Aug 13 01:11:49.082281 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Aug 13 01:11:49.082288 kernel: ... version: 0 Aug 13 01:11:49.082295 kernel: ... bit width: 48 Aug 13 01:11:49.082303 kernel: ... generic registers: 6 Aug 13 01:11:49.082310 kernel: ... value mask: 0000ffffffffffff Aug 13 01:11:49.082317 kernel: ... max period: 00007fffffffffff Aug 13 01:11:49.082324 kernel: ... fixed-purpose events: 0 Aug 13 01:11:49.082332 kernel: ... event mask: 000000000000003f Aug 13 01:11:49.082341 kernel: signal: max sigframe size: 3376 Aug 13 01:11:49.082348 kernel: rcu: Hierarchical SRCU implementation. Aug 13 01:11:49.082357 kernel: rcu: Max phase no-delay instances is 400. 
Aug 13 01:11:49.082364 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Aug 13 01:11:49.082372 kernel: smp: Bringing up secondary CPUs ... Aug 13 01:11:49.082379 kernel: smpboot: x86: Booting SMP configuration: Aug 13 01:11:49.082386 kernel: .... node #0, CPUs: #1 Aug 13 01:11:49.082393 kernel: smp: Brought up 1 node, 2 CPUs Aug 13 01:11:49.082400 kernel: smpboot: Total of 2 processors activated (7999.98 BogoMIPS) Aug 13 01:11:49.082410 kernel: Memory: 3961044K/4193772K available (14336K kernel code, 2430K rwdata, 9960K rodata, 54444K init, 2524K bss, 227300K reserved, 0K cma-reserved) Aug 13 01:11:49.082417 kernel: devtmpfs: initialized Aug 13 01:11:49.082425 kernel: x86/mm: Memory block size: 128MB Aug 13 01:11:49.082432 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 13 01:11:49.082440 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Aug 13 01:11:49.082447 kernel: pinctrl core: initialized pinctrl subsystem Aug 13 01:11:49.082454 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 13 01:11:49.082462 kernel: audit: initializing netlink subsys (disabled) Aug 13 01:11:49.082469 kernel: audit: type=2000 audit(1755047505.181:1): state=initialized audit_enabled=0 res=1 Aug 13 01:11:49.082479 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 13 01:11:49.082486 kernel: thermal_sys: Registered thermal governor 'user_space' Aug 13 01:11:49.082493 kernel: cpuidle: using governor menu Aug 13 01:11:49.082500 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Aug 13 01:11:49.082507 kernel: dca service started, version 1.12.1 Aug 13 01:11:49.082515 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Aug 13 01:11:49.082522 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry Aug 13 01:11:49.082530 kernel: PCI: Using configuration type 1 for base access Aug 13 01:11:49.082537 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Aug 13 01:11:49.082547 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Aug 13 01:11:49.082554 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Aug 13 01:11:49.082561 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Aug 13 01:11:49.082568 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Aug 13 01:11:49.082576 kernel: ACPI: Added _OSI(Module Device) Aug 13 01:11:49.082583 kernel: ACPI: Added _OSI(Processor Device) Aug 13 01:11:49.082590 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 13 01:11:49.082597 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Aug 13 01:11:49.082605 kernel: ACPI: Interpreter enabled Aug 13 01:11:49.082614 kernel: ACPI: PM: (supports S0 S3 S5) Aug 13 01:11:49.082621 kernel: ACPI: Using IOAPIC for interrupt routing Aug 13 01:11:49.082629 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Aug 13 01:11:49.082636 kernel: PCI: Using E820 reservations for host bridge windows Aug 13 01:11:49.082643 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Aug 13 01:11:49.082651 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Aug 13 01:11:49.083121 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Aug 13 01:11:49.083245 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Aug 13 01:11:49.083360 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Aug 13 01:11:49.083370 kernel: PCI host bridge to bus 0000:00 Aug 13 01:11:49.083486 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Aug 13 01:11:49.083586 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Aug 13 01:11:49.083683 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Aug 13 01:11:49.083779 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Aug 13 01:11:49.086130 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Aug 13 01:11:49.086243 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window] Aug 13 01:11:49.086343 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Aug 13 01:11:49.086480 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Aug 13 01:11:49.086610 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Aug 13 01:11:49.086723 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref] Aug 13 01:11:49.086870 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff] Aug 13 01:11:49.087204 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref] Aug 13 01:11:49.087314 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Aug 13 01:11:49.087434 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint Aug 13 01:11:49.087543 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f] Aug 13 01:11:49.087650 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff] Aug 13 01:11:49.087755 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref] Aug 13 01:11:49.090154 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Aug 13 01:11:49.090282 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f] Aug 13 01:11:49.090392 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff] Aug 13 01:11:49.090534 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit 
pref] Aug 13 01:11:49.090652 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref] Aug 13 01:11:49.090783 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Aug 13 01:11:49.091503 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Aug 13 01:11:49.091636 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Aug 13 01:11:49.091747 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df] Aug 13 01:11:49.093035 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff] Aug 13 01:11:49.093166 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Aug 13 01:11:49.093289 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Aug 13 01:11:49.093300 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Aug 13 01:11:49.093308 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Aug 13 01:11:49.093321 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Aug 13 01:11:49.093328 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Aug 13 01:11:49.093336 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Aug 13 01:11:49.093343 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Aug 13 01:11:49.093351 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Aug 13 01:11:49.093358 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Aug 13 01:11:49.093365 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Aug 13 01:11:49.093373 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Aug 13 01:11:49.093380 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Aug 13 01:11:49.093390 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Aug 13 01:11:49.093397 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Aug 13 01:11:49.093404 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Aug 13 01:11:49.093411 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Aug 13 01:11:49.093419 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Aug 13 01:11:49.093426 kernel: iommu: Default domain type: Translated Aug 13 01:11:49.093434 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Aug 13 01:11:49.093442 kernel: PCI: Using ACPI for IRQ routing Aug 13 01:11:49.093449 kernel: PCI: pci_cache_line_size set to 64 bytes Aug 13 01:11:49.093459 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff] Aug 13 01:11:49.093467 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff] Aug 13 01:11:49.093574 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Aug 13 01:11:49.093689 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Aug 13 01:11:49.094514 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Aug 13 01:11:49.094530 kernel: vgaarb: loaded Aug 13 01:11:49.094539 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Aug 13 01:11:49.094547 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Aug 13 01:11:49.094554 kernel: clocksource: Switched to clocksource kvm-clock Aug 13 01:11:49.094567 kernel: VFS: Disk quotas dquot_6.6.0 Aug 13 01:11:49.094575 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 13 01:11:49.094583 kernel: pnp: PnP ACPI init Aug 13 01:11:49.094717 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Aug 13 01:11:49.094730 kernel: pnp: PnP ACPI: found 5 devices Aug 13 01:11:49.094738 kernel: clocksource: acpi_pm: mask: 0xffffff 
max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Aug 13 01:11:49.094746 kernel: NET: Registered PF_INET protocol family Aug 13 01:11:49.094753 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Aug 13 01:11:49.094764 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Aug 13 01:11:49.094772 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 13 01:11:49.094780 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Aug 13 01:11:49.094788 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Aug 13 01:11:49.094795 kernel: TCP: Hash tables configured (established 32768 bind 32768) Aug 13 01:11:49.094833 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Aug 13 01:11:49.094842 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Aug 13 01:11:49.094849 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 13 01:11:49.094857 kernel: NET: Registered PF_XDP protocol family Aug 13 01:11:49.094972 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Aug 13 01:11:49.095071 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Aug 13 01:11:49.095167 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Aug 13 01:11:49.095377 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Aug 13 01:11:49.095472 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Aug 13 01:11:49.095568 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window] Aug 13 01:11:49.095577 kernel: PCI: CLS 0 bytes, default 64 Aug 13 01:11:49.095585 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Aug 13 01:11:49.095596 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB) Aug 13 01:11:49.095604 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a8554e05d, max_idle_ns: 881590540420 ns Aug 13 01:11:49.095612 kernel: Initialise system trusted keyrings Aug 13 01:11:49.095619 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Aug 13 01:11:49.095627 kernel: Key type asymmetric registered Aug 13 01:11:49.095634 kernel: Asymmetric key parser 'x509' registered Aug 13 01:11:49.095642 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Aug 13 01:11:49.095650 kernel: io scheduler mq-deadline registered Aug 13 01:11:49.095657 kernel: io scheduler kyber registered Aug 13 01:11:49.095667 kernel: io scheduler bfq registered Aug 13 01:11:49.095675 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Aug 13 01:11:49.095684 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Aug 13 01:11:49.095691 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Aug 13 01:11:49.095699 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 13 01:11:49.095707 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Aug 13 01:11:49.095714 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Aug 13 01:11:49.095722 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Aug 13 01:11:49.095730 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Aug 13 01:11:49.132308 kernel: rtc_cmos 00:03: RTC can wake from S4 Aug 13 01:11:49.132360 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Aug 13 01:11:49.132487 kernel: rtc_cmos 00:03: registered as rtc0 Aug 13 01:11:49.132713 kernel: rtc_cmos 00:03: setting system clock to 
2025-08-13T01:11:48 UTC (1755047508) Aug 13 01:11:49.133103 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Aug 13 01:11:49.133117 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Aug 13 01:11:49.133125 kernel: NET: Registered PF_INET6 protocol family Aug 13 01:11:49.133134 kernel: Segment Routing with IPv6 Aug 13 01:11:49.133150 kernel: In-situ OAM (IOAM) with IPv6 Aug 13 01:11:49.133159 kernel: NET: Registered PF_PACKET protocol family Aug 13 01:11:49.133167 kernel: Key type dns_resolver registered Aug 13 01:11:49.133176 kernel: IPI shorthand broadcast: enabled Aug 13 01:11:49.133184 kernel: sched_clock: Marking stable (5018195565, 271998738)->(5457051282, -166856979) Aug 13 01:11:49.133192 kernel: registered taskstats version 1 Aug 13 01:11:49.133200 kernel: Loading compiled-in X.509 certificates Aug 13 01:11:49.133208 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.40-flatcar: dee0b464d3f7f8d09744a2392f69dde258bc95c0' Aug 13 01:11:49.133216 kernel: Demotion targets for Node 0: null Aug 13 01:11:49.133227 kernel: Key type .fscrypt registered Aug 13 01:11:49.133235 kernel: Key type fscrypt-provisioning registered Aug 13 01:11:49.133243 kernel: ima: No TPM chip found, activating TPM-bypass! Aug 13 01:11:49.133251 kernel: ima: Allocated hash algorithm: sha1 Aug 13 01:11:49.133259 kernel: ima: No architecture policies found Aug 13 01:11:49.133267 kernel: clk: Disabling unused clocks Aug 13 01:11:49.133275 kernel: Warning: unable to open an initial console. Aug 13 01:11:49.133284 kernel: Freeing unused kernel image (initmem) memory: 54444K Aug 13 01:11:49.133292 kernel: Write protecting the kernel read-only data: 24576k Aug 13 01:11:49.133302 kernel: Freeing unused kernel image (rodata/data gap) memory: 280K Aug 13 01:11:49.133310 kernel: Run /init as init process Aug 13 01:11:49.133318 kernel: with arguments: Aug 13 01:11:49.133327 kernel: /init Aug 13 01:11:49.133335 kernel: with environment: Aug 13 01:11:49.133343 kernel: HOME=/ Aug 13 01:11:49.133365 kernel: TERM=linux Aug 13 01:11:49.133375 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 13 01:11:49.133385 systemd[1]: Successfully made /usr/ read-only. Aug 13 01:11:49.133401 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Aug 13 01:11:49.133410 systemd[1]: Detected virtualization kvm. Aug 13 01:11:49.133419 systemd[1]: Detected architecture x86-64. Aug 13 01:11:49.133427 systemd[1]: Running in initrd. Aug 13 01:11:49.133435 systemd[1]: No hostname configured, using default hostname. Aug 13 01:11:49.133444 systemd[1]: Hostname set to . Aug 13 01:11:49.133453 systemd[1]: Initializing machine ID from random generator. Aug 13 01:11:49.133463 systemd[1]: Queued start job for default target initrd.target. Aug 13 01:11:49.133472 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 01:11:49.133481 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 01:11:49.133491 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Aug 13 01:11:49.133500 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 01:11:49.133509 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Aug 13 01:11:49.133518 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Aug 13 01:11:49.133530 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Aug 13 01:11:49.133539 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Aug 13 01:11:49.133548 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 01:11:49.133556 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 01:11:49.133565 systemd[1]: Reached target paths.target - Path Units. Aug 13 01:11:49.133574 systemd[1]: Reached target slices.target - Slice Units. Aug 13 01:11:49.133582 systemd[1]: Reached target swap.target - Swaps. Aug 13 01:11:49.133591 systemd[1]: Reached target timers.target - Timer Units. Aug 13 01:11:49.133602 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 01:11:49.133611 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 01:11:49.133620 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Aug 13 01:11:49.133629 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Aug 13 01:11:49.133637 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 01:11:49.133646 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 01:11:49.133655 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 01:11:49.133664 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 01:11:49.133675 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Aug 13 01:11:49.133683 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 01:11:49.133692 systemd[1]: Finished network-cleanup.service - Network Cleanup. Aug 13 01:11:49.133701 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Aug 13 01:11:49.133710 systemd[1]: Starting systemd-fsck-usr.service... Aug 13 01:11:49.133721 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 01:11:49.133730 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 01:11:49.133944 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 01:11:49.133953 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Aug 13 01:11:49.133962 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 01:11:49.134010 systemd-journald[205]: Collecting audit messages is disabled. Aug 13 01:11:49.134035 systemd[1]: Finished systemd-fsck-usr.service. Aug 13 01:11:49.134044 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 13 01:11:49.134054 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Aug 13 01:11:49.134066 systemd-journald[205]: Journal started Aug 13 01:11:49.134087 systemd-journald[205]: Runtime Journal (/run/log/journal/cc563945b5d342038605f691ba69dcc4) is 8M, max 78.5M, 70.5M free. Aug 13 01:11:49.076234 systemd-modules-load[207]: Inserted module 'overlay' Aug 13 01:11:49.194240 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 13 01:11:49.194277 kernel: Bridge firewalling registered Aug 13 01:11:49.194300 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 01:11:49.136761 systemd-modules-load[207]: Inserted module 'br_netfilter' Aug 13 01:11:49.195197 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 01:11:49.197759 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 01:11:49.201227 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 01:11:49.203792 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 01:11:49.207108 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 01:11:49.212335 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 01:11:49.231125 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 01:11:49.234158 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 01:11:49.239362 systemd-tmpfiles[222]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Aug 13 01:11:49.243054 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 01:11:49.246965 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Aug 13 01:11:49.248836 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 01:11:49.257931 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 01:11:49.270953 dracut-cmdline[243]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21 Aug 13 01:11:49.307570 systemd-resolved[245]: Positive Trust Anchors: Aug 13 01:11:49.307587 systemd-resolved[245]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 01:11:49.307618 systemd-resolved[245]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 01:11:49.314111 systemd-resolved[245]: Defaulting to hostname 'linux'. Aug 13 01:11:49.315452 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Aug 13 01:11:49.316417 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 01:11:49.395027 kernel: SCSI subsystem initialized Aug 13 01:11:49.404851 kernel: Loading iSCSI transport class v2.0-870. Aug 13 01:11:49.416859 kernel: iscsi: registered transport (tcp) Aug 13 01:11:49.459333 kernel: iscsi: registered transport (qla4xxx) Aug 13 01:11:49.459450 kernel: QLogic iSCSI HBA Driver Aug 13 01:11:49.487920 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 13 01:11:49.524596 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 01:11:49.528547 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 13 01:11:49.582004 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Aug 13 01:11:49.584862 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Aug 13 01:11:49.642847 kernel: raid6: avx2x4 gen() 23742 MB/s Aug 13 01:11:49.660860 kernel: raid6: avx2x2 gen() 23368 MB/s Aug 13 01:11:49.679296 kernel: raid6: avx2x1 gen() 14374 MB/s Aug 13 01:11:49.679361 kernel: raid6: using algorithm avx2x4 gen() 23742 MB/s Aug 13 01:11:49.698474 kernel: raid6: .... xor() 3108 MB/s, rmw enabled Aug 13 01:11:49.698534 kernel: raid6: using avx2x2 recovery algorithm Aug 13 01:11:49.719985 kernel: xor: automatically using best checksumming function avx Aug 13 01:11:49.988017 kernel: Btrfs loaded, zoned=no, fsverity=no Aug 13 01:11:50.000717 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Aug 13 01:11:50.004209 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 01:11:50.035208 systemd-udevd[454]: Using default interface naming scheme 'v255'. Aug 13 01:11:50.041383 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 01:11:50.046424 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Aug 13 01:11:50.071789 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation Aug 13 01:11:50.103732 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 01:11:50.106638 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 01:11:50.184675 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 01:11:50.188121 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Aug 13 01:11:50.248841 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues Aug 13 01:11:50.257840 kernel: scsi host0: Virtio SCSI HBA Aug 13 01:11:50.261656 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Aug 13 01:11:50.439100 kernel: cryptd: max_cpu_qlen set to 1000 Aug 13 01:11:50.442029 kernel: libata version 3.00 loaded. Aug 13 01:11:50.449853 kernel: AES CTR mode by8 optimization enabled Aug 13 01:11:50.473717 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 01:11:50.474123 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 01:11:50.476727 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 01:11:50.489002 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Aug 13 01:11:50.481117 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Aug 13 01:11:50.482207 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Aug 13 01:11:50.524692 kernel: sd 0:0:0:0: Power-on or device reset occurred Aug 13 01:11:50.525289 kernel: sd 0:0:0:0: [sda] 9297920 512-byte logical blocks: (4.76 GB/4.43 GiB) Aug 13 01:11:50.528876 kernel: sd 0:0:0:0: [sda] Write Protect is off Aug 13 01:11:50.529261 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Aug 13 01:11:50.529410 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Aug 13 01:11:50.543013 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Aug 13 01:11:50.543092 kernel: GPT:9289727 != 9297919 Aug 13 01:11:50.543105 kernel: GPT:Alternate GPT header not at the end of the disk. Aug 13 01:11:50.547830 kernel: GPT:9289727 != 9297919 Aug 13 01:11:50.547896 kernel: GPT: Use GNU Parted to correct GPT errors. Aug 13 01:11:50.547909 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 01:11:50.561858 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Aug 13 01:11:50.578863 kernel: ahci 0000:00:1f.2: version 3.0 Aug 13 01:11:50.579268 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Aug 13 01:11:50.602848 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Aug 13 01:11:50.603207 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Aug 13 01:11:50.603360 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Aug 13 01:11:50.607821 kernel: scsi host1: ahci Aug 13 01:11:50.608868 kernel: scsi host2: ahci Aug 13 01:11:50.609059 kernel: scsi host3: ahci Aug 13 01:11:50.609830 kernel: scsi host4: ahci Aug 13 01:11:50.610829 kernel: scsi host5: ahci Aug 13 01:11:50.611008 kernel: scsi host6: ahci Aug 13 01:11:50.611986 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46 lpm-pol 0 Aug 13 01:11:50.612008 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46 lpm-pol 0 Aug 13 01:11:50.612019 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46 lpm-pol 0 Aug 13 01:11:50.612028 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46 lpm-pol 0 Aug 13 01:11:50.612038 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46 lpm-pol 0 Aug 13 01:11:50.612052 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46 lpm-pol 0 Aug 13 01:11:50.678286 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Aug 13 01:11:50.739252 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 01:11:50.749823 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Aug 13 01:11:50.759273 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Aug 13 01:11:50.760120 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Aug 13 01:11:50.769657 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Aug 13 01:11:50.772497 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Aug 13 01:11:50.810094 disk-uuid[624]: Primary Header is updated. Aug 13 01:11:50.810094 disk-uuid[624]: Secondary Entries is updated. Aug 13 01:11:50.810094 disk-uuid[624]: Secondary Header is updated. 
Aug 13 01:11:50.820839 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 01:11:50.837024 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 01:11:50.925840 kernel: ata3: SATA link down (SStatus 0 SControl 300) Aug 13 01:11:50.926110 kernel: ata4: SATA link down (SStatus 0 SControl 300) Aug 13 01:11:50.926122 kernel: ata1: SATA link down (SStatus 0 SControl 300) Aug 13 01:11:50.926132 kernel: ata5: SATA link down (SStatus 0 SControl 300) Aug 13 01:11:50.926142 kernel: ata6: SATA link down (SStatus 0 SControl 300) Aug 13 01:11:50.933191 kernel: ata2: SATA link down (SStatus 0 SControl 300) Aug 13 01:11:51.055335 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Aug 13 01:11:51.087218 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 01:11:51.087942 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 01:11:51.089347 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 01:11:51.092141 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 13 01:11:51.133499 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Aug 13 01:11:51.841045 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 01:11:51.841751 disk-uuid[625]: The operation has completed successfully. Aug 13 01:11:51.914358 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 13 01:11:51.914503 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Aug 13 01:11:51.938339 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 13 01:11:51.960353 sh[653]: Success Aug 13 01:11:51.980700 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 13 01:11:51.980773 kernel: device-mapper: uevent: version 1.0.3 Aug 13 01:11:51.981002 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Aug 13 01:11:51.993872 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Aug 13 01:11:52.048569 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 13 01:11:52.052982 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Aug 13 01:11:52.065872 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Aug 13 01:11:52.079049 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Aug 13 01:11:52.082852 kernel: BTRFS: device fsid 0c0338fb-9434-41c1-99a2-737cbe2351c4 devid 1 transid 44 /dev/mapper/usr (254:0) scanned by mount (665) Aug 13 01:11:52.086191 kernel: BTRFS info (device dm-0): first mount of filesystem 0c0338fb-9434-41c1-99a2-737cbe2351c4 Aug 13 01:11:52.086271 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Aug 13 01:11:52.087918 kernel: BTRFS info (device dm-0): using free-space-tree Aug 13 01:11:52.098479 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Aug 13 01:11:52.099799 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Aug 13 01:11:52.101239 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Aug 13 01:11:52.102176 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Aug 13 01:11:52.106939 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Aug 13 01:11:52.135867 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (696) Aug 13 01:11:52.138848 kernel: BTRFS info (device sda6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 01:11:52.138882 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 01:11:52.142241 kernel: BTRFS info (device sda6): using free-space-tree Aug 13 01:11:52.153944 kernel: BTRFS info (device sda6): last unmount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 01:11:52.156189 systemd[1]: Finished ignition-setup.service - Ignition (setup). Aug 13 01:11:52.159185 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Aug 13 01:11:52.294584 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 01:11:52.353073 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 01:11:52.586721 systemd-networkd[835]: lo: Link UP Aug 13 01:11:52.586739 systemd-networkd[835]: lo: Gained carrier Aug 13 01:11:52.588568 systemd-networkd[835]: Enumeration completed Aug 13 01:11:52.588675 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 01:11:52.589357 systemd-networkd[835]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 01:11:52.589362 systemd-networkd[835]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 01:11:52.590650 systemd[1]: Reached target network.target - Network. Aug 13 01:11:52.610293 systemd-networkd[835]: eth0: Link UP Aug 13 01:11:52.610446 systemd-networkd[835]: eth0: Gained carrier Aug 13 01:11:52.610463 systemd-networkd[835]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 01:11:52.651565 ignition[745]: Ignition 2.21.0 Aug 13 01:11:52.651588 ignition[745]: Stage: fetch-offline Aug 13 01:11:52.651638 ignition[745]: no configs at "/usr/lib/ignition/base.d" Aug 13 01:11:52.651652 ignition[745]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 01:11:52.655196 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 01:11:52.651781 ignition[745]: parsed url from cmdline: "" Aug 13 01:11:52.651787 ignition[745]: no config URL provided Aug 13 01:11:52.651797 ignition[745]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 01:11:52.651853 ignition[745]: no config at "/usr/lib/ignition/user.ign" Aug 13 01:11:52.657923 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Aug 13 01:11:52.651858 ignition[745]: failed to fetch config: resource requires networking Aug 13 01:11:52.652230 ignition[745]: Ignition finished successfully Aug 13 01:11:52.752449 ignition[844]: Ignition 2.21.0 Aug 13 01:11:52.752471 ignition[844]: Stage: fetch Aug 13 01:11:52.752730 ignition[844]: no configs at "/usr/lib/ignition/base.d" Aug 13 01:11:52.752745 ignition[844]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 01:11:52.753093 ignition[844]: parsed url from cmdline: "" Aug 13 01:11:52.753099 ignition[844]: no config URL provided Aug 13 01:11:52.753106 ignition[844]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 01:11:52.753118 ignition[844]: no config at "/usr/lib/ignition/user.ign" Aug 13 01:11:52.753204 ignition[844]: PUT http://169.254.169.254/v1/token: attempt #1 Aug 13 01:11:52.753479 ignition[844]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Aug 13 01:11:52.953719 ignition[844]: PUT http://169.254.169.254/v1/token: attempt #2 Aug 13 01:11:52.954295 ignition[844]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Aug 13 01:11:53.230146 systemd-networkd[835]: eth0: DHCPv4 address 172.234.199.8/24, gateway 172.234.199.1 acquired from 23.192.120.221 Aug 13 01:11:53.354418 ignition[844]: PUT http://169.254.169.254/v1/token: attempt #3 Aug 13 01:11:53.460784 ignition[844]: PUT result: OK Aug 13 01:11:53.461097 ignition[844]: GET http://169.254.169.254/v1/user-data: attempt #1 Aug 13 01:11:53.593593 ignition[844]: GET result: OK Aug 13 01:11:53.594107 ignition[844]: parsing config with SHA512: aa2abba97d31ca287677cbd3862c266a282c85d21e6ef9874512d5a31447b2db3c0532844f4a1012680ae05969fb4fece22225ec7dbf909291489ac4d6c0f7e2 Aug 13 01:11:53.621475 unknown[844]: fetched base config from "system" Aug 13 01:11:53.621761 ignition[844]: fetch: fetch complete Aug 13 01:11:53.621490 unknown[844]: fetched base config from "system" Aug 13 01:11:53.621767 ignition[844]: fetch: fetch passed Aug 13 01:11:53.621495 unknown[844]: fetched user config from "akamai" Aug 13 01:11:53.623097 ignition[844]: Ignition finished successfully Aug 13 01:11:53.627655 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Aug 13 01:11:53.635671 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Aug 13 01:11:53.671248 ignition[851]: Ignition 2.21.0 Aug 13 01:11:53.671269 ignition[851]: Stage: kargs Aug 13 01:11:53.671427 ignition[851]: no configs at "/usr/lib/ignition/base.d" Aug 13 01:11:53.671439 ignition[851]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 01:11:53.672381 ignition[851]: kargs: kargs passed Aug 13 01:11:53.675281 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Aug 13 01:11:53.672428 ignition[851]: Ignition finished successfully Aug 13 01:11:53.678515 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Aug 13 01:11:53.704349 ignition[858]: Ignition 2.21.0 Aug 13 01:11:53.704364 ignition[858]: Stage: disks Aug 13 01:11:53.704526 ignition[858]: no configs at "/usr/lib/ignition/base.d" Aug 13 01:11:53.704544 ignition[858]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 01:11:53.708507 ignition[858]: disks: disks passed Aug 13 01:11:53.708606 ignition[858]: Ignition finished successfully Aug 13 01:11:53.713063 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Aug 13 01:11:53.714626 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Aug 13 01:11:53.715260 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 13 01:11:53.716994 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 01:11:53.718590 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 01:11:53.719893 systemd[1]: Reached target basic.target - Basic System. Aug 13 01:11:53.723218 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Aug 13 01:11:53.782143 systemd-fsck[866]: ROOT: clean, 15/553520 files, 52789/553472 blocks Aug 13 01:11:53.785159 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 13 01:11:53.790540 systemd[1]: Mounting sysroot.mount - /sysroot... Aug 13 01:11:53.946067 kernel: EXT4-fs (sda9): mounted filesystem 069caac6-7833-4acd-8940-01a7ff7d1281 r/w with ordered data mode. Quota mode: none. Aug 13 01:11:53.946738 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 13 01:11:53.948155 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 13 01:11:53.950620 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 01:11:53.953922 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 13 01:11:53.955379 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Aug 13 01:11:53.955420 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 13 01:11:53.955446 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 01:11:53.972450 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 13 01:11:53.974742 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Aug 13 01:11:53.986079 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (874) Aug 13 01:11:53.990274 kernel: BTRFS info (device sda6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 01:11:53.990308 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 01:11:53.994254 kernel: BTRFS info (device sda6): using free-space-tree Aug 13 01:11:53.998290 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 13 01:11:54.070590 initrd-setup-root[898]: cut: /sysroot/etc/passwd: No such file or directory Aug 13 01:11:54.075463 initrd-setup-root[905]: cut: /sysroot/etc/group: No such file or directory Aug 13 01:11:54.080610 initrd-setup-root[912]: cut: /sysroot/etc/shadow: No such file or directory Aug 13 01:11:54.086171 initrd-setup-root[919]: cut: /sysroot/etc/gshadow: No such file or directory Aug 13 01:11:54.226709 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 13 01:11:54.230693 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 13 01:11:54.232012 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 13 01:11:54.257613 systemd[1]: sysroot-oem.mount: Deactivated successfully. Aug 13 01:11:54.262331 kernel: BTRFS info (device sda6): last unmount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 01:11:54.302765 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Aug 13 01:11:54.303794 systemd-networkd[835]: eth0: Gained IPv6LL Aug 13 01:11:54.325223 ignition[987]: INFO : Ignition 2.21.0 Aug 13 01:11:54.325223 ignition[987]: INFO : Stage: mount Aug 13 01:11:54.328759 ignition[987]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 01:11:54.328759 ignition[987]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 01:11:54.328759 ignition[987]: INFO : mount: mount passed Aug 13 01:11:54.328759 ignition[987]: INFO : Ignition finished successfully Aug 13 01:11:54.329572 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 13 01:11:54.333102 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 13 01:11:54.949134 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 01:11:55.005467 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (999) Aug 13 01:11:55.005558 kernel: BTRFS info (device sda6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 01:11:55.007856 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 01:11:55.010715 kernel: BTRFS info (device sda6): using free-space-tree Aug 13 01:11:55.017694 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 13 01:11:55.078720 ignition[1015]: INFO : Ignition 2.21.0 Aug 13 01:11:55.078720 ignition[1015]: INFO : Stage: files Aug 13 01:11:55.080425 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 01:11:55.080425 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 01:11:55.082829 ignition[1015]: DEBUG : files: compiled without relabeling support, skipping Aug 13 01:11:55.084181 ignition[1015]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 13 01:11:55.084181 ignition[1015]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 13 01:11:55.086208 ignition[1015]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 13 01:11:55.087030 ignition[1015]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 13 01:11:55.088024 ignition[1015]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 13 01:11:55.087065 unknown[1015]: wrote ssh authorized keys file for user: core Aug 13 01:11:55.089591 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Aug 13 01:11:55.089591 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Aug 13 01:11:55.322527 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Aug 13 01:11:55.579436 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Aug 13 01:11:55.579436 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Aug 13 01:11:55.579436 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Aug 13 01:11:55.579436 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 13 01:11:55.579436 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file 
"/sysroot/home/core/nginx.yaml" Aug 13 01:11:55.579436 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 01:11:55.579436 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 01:11:55.579436 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 01:11:55.579436 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 01:11:55.591081 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 01:11:55.591081 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 01:11:55.591081 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 13 01:11:55.595619 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 13 01:11:55.595619 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 13 01:11:55.595619 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Aug 13 01:11:56.157245 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Aug 13 01:11:57.671399 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 13 01:11:57.671399 ignition[1015]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Aug 13 01:11:57.681025 ignition[1015]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 01:11:57.717207 ignition[1015]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 01:11:57.717207 ignition[1015]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Aug 13 01:11:57.717207 ignition[1015]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Aug 13 01:11:57.717207 ignition[1015]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Aug 13 01:11:57.717207 ignition[1015]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Aug 13 01:11:57.717207 ignition[1015]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Aug 13 01:11:57.717207 ignition[1015]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Aug 13 01:11:57.717207 ignition[1015]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Aug 13 01:11:57.717207 
ignition[1015]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 13 01:11:57.717207 ignition[1015]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 13 01:11:57.717207 ignition[1015]: INFO : files: files passed Aug 13 01:11:57.717207 ignition[1015]: INFO : Ignition finished successfully Aug 13 01:11:57.720841 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 13 01:11:57.731840 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 13 01:11:57.739513 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 13 01:11:57.754117 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 13 01:11:57.760223 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Aug 13 01:11:57.829178 initrd-setup-root-after-ignition[1046]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 01:11:57.829178 initrd-setup-root-after-ignition[1046]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 13 01:11:57.833095 initrd-setup-root-after-ignition[1049]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 01:11:57.836028 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 01:11:57.838625 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 13 01:11:57.841629 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 13 01:11:57.951032 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 13 01:11:57.951255 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 13 01:11:57.959225 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 13 01:11:57.960200 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 13 01:11:57.961255 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 13 01:11:57.963094 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 13 01:11:58.016301 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 01:11:58.025015 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 13 01:11:58.104786 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 13 01:11:58.108335 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 01:11:58.110333 systemd[1]: Stopped target timers.target - Timer Units. Aug 13 01:11:58.111171 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 13 01:11:58.111397 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 01:11:58.113172 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 13 01:11:58.114326 systemd[1]: Stopped target basic.target - Basic System. Aug 13 01:11:58.117378 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 13 01:11:58.118727 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 01:11:58.126908 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 13 01:11:58.129226 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. 
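Everything the files stage logs above is driven by the fetched user-data: the helm tarball and the kubernetes sysext image are downloaded, /etc/extensions/kubernetes.raw is linked to the image under /opt/extensions, prepare-helm.service is installed and preset to enabled, and coreos-metadata.service gets a drop-in. The sketch below shows the rough shape of a config that would produce a subset of those operations; it assumes Ignition spec 3.x field names and uses a placeholder unit body, since the real user-data is not reproduced in the log.

    # Illustrative sketch only: a minimal Ignition-style config (spec version assumed)
    # covering a fetched file, the sysext symlink, and an enabled unit.
    import json

    config = {
        "ignition": {"version": "3.4.0"},  # assumed spec version
        "storage": {
            "files": [
                {
                    "path": "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw",
                    "contents": {
                        "source": "https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw"
                    },
                }
            ],
            "links": [
                {
                    "path": "/etc/extensions/kubernetes.raw",
                    "target": "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw",
                }
            ],
        },
        "systemd": {
            "units": [
                {
                    "name": "prepare-helm.service",
                    "enabled": True,
                    # Placeholder body; the real unit content comes from the user-data.
                    "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n"
                                "[Service]\nType=oneshot\nExecStart=/usr/bin/true\n"
                                "[Install]\nWantedBy=multi-user.target\n",
                }
            ]
        },
    }

    print(json.dumps(config, indent=2))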
Aug 13 01:11:58.130112 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 13 01:11:58.131054 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 01:11:58.132085 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 13 01:11:58.132993 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 13 01:11:58.133885 systemd[1]: Stopped target swap.target - Swaps. Aug 13 01:11:58.134697 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 13 01:11:58.134972 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 13 01:11:58.136140 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 13 01:11:58.137197 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 01:11:58.138131 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 13 01:11:58.138263 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 01:11:58.139279 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 13 01:11:58.139453 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 13 01:11:58.140775 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 13 01:11:58.140996 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 01:11:58.142096 systemd[1]: ignition-files.service: Deactivated successfully. Aug 13 01:11:58.142258 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 13 01:11:58.150103 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 13 01:11:58.166098 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 13 01:11:58.174292 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 13 01:11:58.174684 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 01:11:58.178241 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 13 01:11:58.179959 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 01:11:58.194172 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 13 01:11:58.194355 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 13 01:11:58.249892 ignition[1070]: INFO : Ignition 2.21.0 Aug 13 01:11:58.249892 ignition[1070]: INFO : Stage: umount Aug 13 01:11:58.249892 ignition[1070]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 01:11:58.249892 ignition[1070]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 01:11:58.255252 ignition[1070]: INFO : umount: umount passed Aug 13 01:11:58.255252 ignition[1070]: INFO : Ignition finished successfully Aug 13 01:11:58.259040 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 13 01:11:58.267333 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 13 01:11:58.270479 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 13 01:11:58.270631 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 13 01:11:58.272774 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 13 01:11:58.273613 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 13 01:11:58.276207 systemd[1]: ignition-fetch.service: Deactivated successfully. Aug 13 01:11:58.276334 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). 
Aug 13 01:11:58.277783 systemd[1]: Stopped target network.target - Network. Aug 13 01:11:58.280449 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 13 01:11:58.280570 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 01:11:58.281642 systemd[1]: Stopped target paths.target - Path Units. Aug 13 01:11:58.283321 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 13 01:11:58.286895 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 01:11:58.288514 systemd[1]: Stopped target slices.target - Slice Units. Aug 13 01:11:58.290098 systemd[1]: Stopped target sockets.target - Socket Units. Aug 13 01:11:58.290800 systemd[1]: iscsid.socket: Deactivated successfully. Aug 13 01:11:58.290891 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 01:11:58.293200 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 13 01:11:58.293265 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 01:11:58.294321 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 13 01:11:58.294428 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 13 01:11:58.295220 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 13 01:11:58.295294 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 13 01:11:58.297667 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 13 01:11:58.299139 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 13 01:11:58.314622 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 13 01:11:58.316236 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 13 01:11:58.316434 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 13 01:11:58.324272 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Aug 13 01:11:58.324750 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 13 01:11:58.325006 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 13 01:11:58.327418 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Aug 13 01:11:58.327768 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 13 01:11:58.328038 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 13 01:11:58.334739 systemd[1]: Stopped target network-pre.target - Preparation for Network. Aug 13 01:11:58.338468 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 13 01:11:58.338565 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 13 01:11:58.339401 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 13 01:11:58.339512 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 13 01:11:58.350049 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 13 01:11:58.350987 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 13 01:11:58.351124 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 01:11:58.352214 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 01:11:58.352304 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 13 01:11:58.357743 systemd[1]: systemd-modules-load.service: Deactivated successfully. 
Aug 13 01:11:58.358074 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 13 01:11:58.360110 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 13 01:11:58.360194 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 01:11:58.363198 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 01:11:58.371763 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Aug 13 01:11:58.371930 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Aug 13 01:11:58.398626 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 13 01:11:58.402731 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 01:11:58.406771 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 13 01:11:58.407019 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 13 01:11:58.411390 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 13 01:11:58.411537 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 13 01:11:58.415321 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 13 01:11:58.415422 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 01:11:58.417929 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 13 01:11:58.418043 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 13 01:11:58.421380 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 13 01:11:58.421464 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 13 01:11:58.422300 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 01:11:58.422392 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 01:11:58.425990 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 13 01:11:58.426703 systemd[1]: systemd-network-generator.service: Deactivated successfully. Aug 13 01:11:58.426775 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 01:11:58.429284 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 13 01:11:58.429367 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 01:11:58.431979 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Aug 13 01:11:58.432150 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 01:11:58.436144 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 13 01:11:58.436284 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 01:11:58.439478 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 01:11:58.439602 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 01:11:58.444279 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Aug 13 01:11:58.444415 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Aug 13 01:11:58.444514 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. 
Aug 13 01:11:58.444602 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Aug 13 01:11:58.456075 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 13 01:11:58.456287 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 13 01:11:58.459448 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 13 01:11:58.465038 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 13 01:11:58.493568 systemd[1]: Switching root. Aug 13 01:11:58.545219 systemd-journald[205]: Journal stopped Aug 13 01:12:00.057694 systemd-journald[205]: Received SIGTERM from PID 1 (systemd). Aug 13 01:12:00.057736 kernel: SELinux: policy capability network_peer_controls=1 Aug 13 01:12:00.057749 kernel: SELinux: policy capability open_perms=1 Aug 13 01:12:00.057761 kernel: SELinux: policy capability extended_socket_class=1 Aug 13 01:12:00.057772 kernel: SELinux: policy capability always_check_network=0 Aug 13 01:12:00.057781 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 13 01:12:00.057790 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 13 01:12:00.057822 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 13 01:12:00.057832 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 13 01:12:00.057842 kernel: SELinux: policy capability userspace_initial_context=0 Aug 13 01:12:00.057854 kernel: audit: type=1403 audit(1755047518.745:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 13 01:12:00.057864 systemd[1]: Successfully loaded SELinux policy in 77.410ms. Aug 13 01:12:00.057874 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 18.404ms. Aug 13 01:12:00.057886 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Aug 13 01:12:00.057897 systemd[1]: Detected virtualization kvm. Aug 13 01:12:00.057909 systemd[1]: Detected architecture x86-64. Aug 13 01:12:00.057919 systemd[1]: Detected first boot. Aug 13 01:12:00.057930 systemd[1]: Initializing machine ID from random generator. Aug 13 01:12:00.057940 zram_generator::config[1121]: No configuration found. Aug 13 01:12:00.057951 kernel: Guest personality initialized and is inactive Aug 13 01:12:00.057960 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Aug 13 01:12:00.057969 kernel: Initialized host personality Aug 13 01:12:00.057981 kernel: NET: Registered PF_VSOCK protocol family Aug 13 01:12:00.057991 systemd[1]: Populated /etc with preset unit settings. Aug 13 01:12:00.058190 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Aug 13 01:12:00.058235 systemd[1]: initrd-switch-root.service: Deactivated successfully. Aug 13 01:12:00.058246 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Aug 13 01:12:00.058257 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Aug 13 01:12:00.058267 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Aug 13 01:12:00.058281 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Aug 13 01:12:00.058292 systemd[1]: Created slice system-getty.slice - Slice /system/getty. 
Aug 13 01:12:00.058302 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Aug 13 01:12:00.058313 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Aug 13 01:12:00.058323 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Aug 13 01:12:00.058333 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Aug 13 01:12:00.058343 systemd[1]: Created slice user.slice - User and Session Slice. Aug 13 01:12:00.058356 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 01:12:00.058366 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 01:12:00.058376 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Aug 13 01:12:00.058386 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Aug 13 01:12:00.058400 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Aug 13 01:12:00.058411 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 01:12:00.058421 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Aug 13 01:12:00.058432 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 01:12:00.058445 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 01:12:00.058455 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Aug 13 01:12:00.058467 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Aug 13 01:12:00.058478 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Aug 13 01:12:00.058488 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Aug 13 01:12:00.058499 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 01:12:00.058509 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 01:12:00.058520 systemd[1]: Reached target slices.target - Slice Units. Aug 13 01:12:00.058533 systemd[1]: Reached target swap.target - Swaps. Aug 13 01:12:00.058544 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Aug 13 01:12:00.058554 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 13 01:12:00.058564 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Aug 13 01:12:00.058575 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 01:12:00.058588 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 01:12:00.058599 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 01:12:00.058609 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Aug 13 01:12:00.058620 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Aug 13 01:12:00.058630 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Aug 13 01:12:00.058640 systemd[1]: Mounting media.mount - External Media Directory... Aug 13 01:12:00.058651 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:12:00.058662 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... 
Aug 13 01:12:00.058675 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Aug 13 01:12:00.058685 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Aug 13 01:12:00.058698 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 13 01:12:00.058708 systemd[1]: Reached target machines.target - Containers. Aug 13 01:12:00.058719 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Aug 13 01:12:00.058730 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 01:12:00.058740 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 01:12:00.058751 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 13 01:12:00.058764 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 01:12:00.058774 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 01:12:00.058785 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 01:12:00.058795 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Aug 13 01:12:00.062178 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 01:12:00.062195 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 01:12:00.062207 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 13 01:12:00.062218 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Aug 13 01:12:00.062228 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 13 01:12:00.062242 systemd[1]: Stopped systemd-fsck-usr.service. Aug 13 01:12:00.062254 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 01:12:00.062265 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 01:12:00.062276 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 01:12:00.062287 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 13 01:12:00.062297 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 13 01:12:00.062310 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Aug 13 01:12:00.062320 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 01:12:00.062333 systemd[1]: verity-setup.service: Deactivated successfully. Aug 13 01:12:00.062344 systemd[1]: Stopped verity-setup.service. Aug 13 01:12:00.062355 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:12:00.062366 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Aug 13 01:12:00.062377 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Aug 13 01:12:00.062388 systemd[1]: Mounted media.mount - External Media Directory. 
Aug 13 01:12:00.062398 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 13 01:12:00.062409 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 13 01:12:00.062422 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 13 01:12:00.062433 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Aug 13 01:12:00.062476 systemd-journald[1184]: Collecting audit messages is disabled. Aug 13 01:12:00.062500 kernel: ACPI: bus type drm_connector registered Aug 13 01:12:00.062513 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 01:12:00.062524 kernel: fuse: init (API version 7.41) Aug 13 01:12:00.062535 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 13 01:12:00.062545 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Aug 13 01:12:00.062556 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 01:12:00.062567 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 01:12:00.062578 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 01:12:00.062610 kernel: loop: module loaded Aug 13 01:12:00.062621 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 01:12:00.062636 systemd-journald[1184]: Journal started Aug 13 01:12:00.062666 systemd-journald[1184]: Runtime Journal (/run/log/journal/67adabbd0adb43ebb4b00701e2405f7b) is 8M, max 78.5M, 70.5M free. Aug 13 01:11:59.440765 systemd[1]: Queued start job for default target multi-user.target. Aug 13 01:12:00.065064 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 01:11:59.467426 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Aug 13 01:11:59.468061 systemd[1]: systemd-journald.service: Deactivated successfully. Aug 13 01:12:00.068866 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 01:12:00.069152 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 01:12:00.070029 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 13 01:12:00.070273 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 13 01:12:00.071100 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 01:12:00.071596 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 01:12:00.072946 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 01:12:00.074248 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 01:12:00.076258 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 13 01:12:00.094200 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Aug 13 01:12:00.097435 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 13 01:12:00.100901 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Aug 13 01:12:00.103961 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Aug 13 01:12:00.106886 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 13 01:12:00.106929 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 01:12:00.109085 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. 
Aug 13 01:12:00.120246 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Aug 13 01:12:00.121480 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 01:12:00.125485 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Aug 13 01:12:00.128931 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Aug 13 01:12:00.129716 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 01:12:00.132029 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Aug 13 01:12:00.132902 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 01:12:00.135949 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 01:12:00.140072 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 13 01:12:00.143190 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 13 01:12:00.153097 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 13 01:12:00.154607 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 13 01:12:00.195008 systemd-journald[1184]: Time spent on flushing to /var/log/journal/67adabbd0adb43ebb4b00701e2405f7b is 73.373ms for 1001 entries. Aug 13 01:12:00.195008 systemd-journald[1184]: System Journal (/var/log/journal/67adabbd0adb43ebb4b00701e2405f7b) is 8M, max 195.6M, 187.6M free. Aug 13 01:12:00.371351 systemd-journald[1184]: Received client request to flush runtime journal. Aug 13 01:12:00.371404 kernel: loop0: detected capacity change from 0 to 8 Aug 13 01:12:00.371430 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 01:12:00.371455 kernel: loop1: detected capacity change from 0 to 224512 Aug 13 01:12:00.225345 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 13 01:12:00.226139 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Aug 13 01:12:00.235343 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Aug 13 01:12:00.278968 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 01:12:00.340420 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 01:12:00.369307 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Aug 13 01:12:00.378458 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 13 01:12:00.410467 systemd-tmpfiles[1240]: ACLs are not supported, ignoring. Aug 13 01:12:00.410489 systemd-tmpfiles[1240]: ACLs are not supported, ignoring. Aug 13 01:12:00.436289 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 01:12:00.443093 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 13 01:12:00.448830 kernel: loop2: detected capacity change from 0 to 146240 Aug 13 01:12:00.483723 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 13 01:12:00.503841 kernel: loop3: detected capacity change from 0 to 113872 Aug 13 01:12:00.544939 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Aug 13 01:12:00.549956 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 01:12:00.560898 kernel: loop4: detected capacity change from 0 to 8 Aug 13 01:12:00.582163 kernel: loop5: detected capacity change from 0 to 224512 Aug 13 01:12:00.611777 kernel: loop6: detected capacity change from 0 to 146240 Aug 13 01:12:00.678613 systemd-tmpfiles[1262]: ACLs are not supported, ignoring. Aug 13 01:12:00.678636 systemd-tmpfiles[1262]: ACLs are not supported, ignoring. Aug 13 01:12:00.699708 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 01:12:00.706852 kernel: loop7: detected capacity change from 0 to 113872 Aug 13 01:12:00.739175 (sd-merge)[1263]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'. Aug 13 01:12:00.740564 (sd-merge)[1263]: Merged extensions into '/usr'. Aug 13 01:12:00.763970 systemd[1]: Reload requested from client PID 1239 ('systemd-sysext') (unit systemd-sysext.service)... Aug 13 01:12:00.763996 systemd[1]: Reloading... Aug 13 01:12:01.010865 zram_generator::config[1288]: No configuration found. Aug 13 01:12:01.197333 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 01:12:01.384875 systemd[1]: Reloading finished in 620 ms. Aug 13 01:12:01.406300 ldconfig[1234]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 01:12:01.413449 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 13 01:12:01.415062 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 13 01:12:01.429157 systemd[1]: Starting ensure-sysext.service... Aug 13 01:12:01.431243 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 01:12:01.482943 systemd[1]: Reload requested from client PID 1335 ('systemctl') (unit ensure-sysext.service)... Aug 13 01:12:01.482962 systemd[1]: Reloading... Aug 13 01:12:01.509987 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Aug 13 01:12:01.510534 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Aug 13 01:12:01.510953 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 01:12:01.511333 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 13 01:12:01.512479 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 13 01:12:01.515780 systemd-tmpfiles[1336]: ACLs are not supported, ignoring. Aug 13 01:12:01.517220 systemd-tmpfiles[1336]: ACLs are not supported, ignoring. Aug 13 01:12:01.528418 systemd-tmpfiles[1336]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 01:12:01.528640 systemd-tmpfiles[1336]: Skipping /boot Aug 13 01:12:01.547883 systemd-tmpfiles[1336]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 01:12:01.547936 systemd-tmpfiles[1336]: Skipping /boot Aug 13 01:12:01.604846 zram_generator::config[1359]: No configuration found. 
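The loop-device probes and the "(sd-merge)" line above are systemd-sysext overlaying the containerd-flatcar, docker-flatcar, kubernetes and oem-akamai images onto /usr, which is what prompts the service-manager reload that follows. A small sketch for inspecting the result on the booted host, assuming the standard extension-release layout that merged images leave under /usr/lib/extension-release.d:

    # Illustrative sketch: list the sysext images currently merged into /usr.
    from pathlib import Path

    RELEASE_DIR = Path("/usr/lib/extension-release.d")  # sysext convention

    def merged_extensions() -> dict[str, dict[str, str]]:
        info = {}
        for release in sorted(RELEASE_DIR.glob("extension-release.*")):
            name = release.name.removeprefix("extension-release.")
            fields = {}
            for line in release.read_text().splitlines():
                if "=" in line and not line.startswith("#"):
                    key, _, value = line.partition("=")
                    fields[key.strip()] = value.strip().strip('"')
            info[name] = fields
        return info

    if __name__ == "__main__":
        for name, fields in merged_extensions().items():
            print(name, fields.get("ID", "?"), fields.get("SYSEXT_LEVEL", ""))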
Aug 13 01:12:01.730587 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 01:12:01.839244 systemd[1]: Reloading finished in 355 ms. Aug 13 01:12:01.855243 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 13 01:12:01.873290 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 01:12:01.882537 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 13 01:12:01.885995 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 13 01:12:01.894844 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 13 01:12:01.898352 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 01:12:01.904886 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 01:12:01.911019 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 13 01:12:01.917941 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:12:01.918786 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 01:12:01.923306 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 01:12:01.926465 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 01:12:01.933014 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 01:12:01.933730 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 01:12:01.933913 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 01:12:01.934014 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:12:01.940633 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 13 01:12:01.947431 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:12:01.947632 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 01:12:01.949861 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 01:12:01.950007 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 01:12:01.950140 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:12:01.956703 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Aug 13 01:12:01.957978 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 01:12:01.965045 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 01:12:01.965908 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 01:12:01.966045 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 01:12:01.966221 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:12:01.972350 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 13 01:12:01.974537 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 01:12:01.974797 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 01:12:01.983121 systemd[1]: Finished ensure-sysext.service. Aug 13 01:12:01.984414 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 01:12:01.984682 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 01:12:01.995200 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 01:12:02.007938 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Aug 13 01:12:02.011548 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 13 01:12:02.014258 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 01:12:02.014548 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 01:12:02.016478 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 01:12:02.016711 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 01:12:02.020436 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 01:12:02.023225 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 13 01:12:02.044892 systemd-udevd[1412]: Using default interface naming scheme 'v255'. Aug 13 01:12:02.054937 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 13 01:12:02.067434 augenrules[1450]: No rules Aug 13 01:12:02.067258 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 13 01:12:02.069535 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 13 01:12:02.074345 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 01:12:02.074891 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 13 01:12:02.076419 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 01:12:02.097340 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 01:12:02.103899 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 01:12:02.226572 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
Aug 13 01:12:02.276262 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Aug 13 01:12:02.278031 systemd[1]: Reached target time-set.target - System Time Set. Aug 13 01:12:02.332833 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Aug 13 01:12:02.335832 kernel: mousedev: PS/2 mouse device common for all mice Aug 13 01:12:02.347925 systemd-resolved[1411]: Positive Trust Anchors: Aug 13 01:12:02.347954 systemd-resolved[1411]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 01:12:02.348000 systemd-resolved[1411]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 01:12:02.353667 systemd-resolved[1411]: Defaulting to hostname 'linux'. Aug 13 01:12:02.356069 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 01:12:02.356957 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 01:12:02.357731 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 01:12:02.358558 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 13 01:12:02.359314 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 13 01:12:02.360044 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Aug 13 01:12:02.363656 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 13 01:12:02.365060 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 13 01:12:02.365937 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 13 01:12:02.366688 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 13 01:12:02.366739 systemd[1]: Reached target paths.target - Path Units. Aug 13 01:12:02.368374 kernel: ACPI: button: Power Button [PWRF] Aug 13 01:12:02.369277 systemd[1]: Reached target timers.target - Timer Units. Aug 13 01:12:02.371862 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 13 01:12:02.374922 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 13 01:12:02.382079 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Aug 13 01:12:02.384053 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Aug 13 01:12:02.384628 systemd[1]: Reached target ssh-access.target - SSH Access Available. Aug 13 01:12:02.394510 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 13 01:12:02.398124 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Aug 13 01:12:02.399855 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 13 01:12:02.402611 systemd[1]: Reached target sockets.target - Socket Units. 
Aug 13 01:12:02.403891 systemd[1]: Reached target basic.target - Basic System. Aug 13 01:12:02.404925 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 13 01:12:02.404971 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 13 01:12:02.413272 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Aug 13 01:12:02.413563 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Aug 13 01:12:02.407168 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Aug 13 01:12:02.416053 systemd-networkd[1463]: lo: Link UP Aug 13 01:12:02.416062 systemd-networkd[1463]: lo: Gained carrier Aug 13 01:12:02.417060 systemd-networkd[1463]: Enumeration completed Aug 13 01:12:02.426083 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 13 01:12:02.429849 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 13 01:12:02.433966 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 13 01:12:02.441196 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 13 01:12:02.442154 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 13 01:12:02.448336 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Aug 13 01:12:02.463580 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 13 01:12:02.476885 jq[1505]: false Aug 13 01:12:02.477654 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 13 01:12:02.481071 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 13 01:12:02.484284 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 13 01:12:02.492866 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 13 01:12:02.495413 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 13 01:12:02.496636 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 13 01:12:02.498056 systemd[1]: Starting update-engine.service - Update Engine... Aug 13 01:12:02.501084 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 13 01:12:02.503538 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 01:12:02.507286 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 13 01:12:02.509424 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 01:12:02.509705 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 13 01:12:02.517937 systemd[1]: Reached target network.target - Network. Aug 13 01:12:02.529500 systemd[1]: Starting containerd.service - containerd container runtime... Aug 13 01:12:02.545127 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Aug 13 01:12:02.550553 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Aug 13 01:12:02.569690 jq[1518]: true Aug 13 01:12:02.573261 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 13 01:12:02.573567 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 13 01:12:02.579828 google_oslogin_nss_cache[1508]: oslogin_cache_refresh[1508]: Refreshing passwd entry cache Aug 13 01:12:02.579998 oslogin_cache_refresh[1508]: Refreshing passwd entry cache Aug 13 01:12:02.592710 google_oslogin_nss_cache[1508]: oslogin_cache_refresh[1508]: Failure getting users, quitting Aug 13 01:12:02.592702 oslogin_cache_refresh[1508]: Failure getting users, quitting Aug 13 01:12:02.592887 google_oslogin_nss_cache[1508]: oslogin_cache_refresh[1508]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Aug 13 01:12:02.592887 google_oslogin_nss_cache[1508]: oslogin_cache_refresh[1508]: Refreshing group entry cache Aug 13 01:12:02.592727 oslogin_cache_refresh[1508]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Aug 13 01:12:02.592772 oslogin_cache_refresh[1508]: Refreshing group entry cache Aug 13 01:12:02.597666 google_oslogin_nss_cache[1508]: oslogin_cache_refresh[1508]: Failure getting groups, quitting Aug 13 01:12:02.597666 google_oslogin_nss_cache[1508]: oslogin_cache_refresh[1508]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Aug 13 01:12:02.597661 oslogin_cache_refresh[1508]: Failure getting groups, quitting Aug 13 01:12:02.597671 oslogin_cache_refresh[1508]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Aug 13 01:12:02.607386 tar[1520]: linux-amd64/LICENSE Aug 13 01:12:02.607707 tar[1520]: linux-amd64/helm Aug 13 01:12:02.619356 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Aug 13 01:12:02.620150 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Aug 13 01:12:02.632171 update_engine[1517]: I20250813 01:12:02.630251 1517 main.cc:92] Flatcar Update Engine starting Aug 13 01:12:02.640622 (ntainerd)[1543]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 13 01:12:02.645435 dbus-daemon[1503]: [system] SELinux support is enabled Aug 13 01:12:02.645612 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 13 01:12:02.649997 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 13 01:12:02.650029 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 13 01:12:02.651910 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 01:12:02.651931 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 13 01:12:02.653608 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Aug 13 01:12:02.657500 jq[1542]: true Aug 13 01:12:02.671219 systemd[1]: Started update-engine.service - Update Engine. Aug 13 01:12:02.674049 update_engine[1517]: I20250813 01:12:02.673890 1517 update_check_scheduler.cc:74] Next update check in 2m17s Aug 13 01:12:02.675424 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Aug 13 01:12:02.702959 coreos-metadata[1502]: Aug 13 01:12:02.702 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Aug 13 01:12:02.717118 extend-filesystems[1506]: Found /dev/sda6 Aug 13 01:12:02.723362 systemd[1]: motdgen.service: Deactivated successfully. Aug 13 01:12:02.723698 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 13 01:12:02.741083 extend-filesystems[1506]: Found /dev/sda9 Aug 13 01:12:02.754988 extend-filesystems[1506]: Checking size of /dev/sda9 Aug 13 01:12:02.941531 sshd_keygen[1541]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 01:12:02.954972 bash[1573]: Updated "/home/core/.ssh/authorized_keys" Aug 13 01:12:02.947398 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 13 01:12:02.954334 systemd[1]: Starting sshkeys.service... Aug 13 01:12:03.065612 systemd-networkd[1463]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 01:12:03.065631 systemd-networkd[1463]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 01:12:03.078184 systemd-networkd[1463]: eth0: Link UP Aug 13 01:12:03.078458 systemd-networkd[1463]: eth0: Gained carrier Aug 13 01:12:03.078490 systemd-networkd[1463]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 01:12:03.135728 extend-filesystems[1506]: Resized partition /dev/sda9 Aug 13 01:12:03.173438 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Aug 13 01:12:03.182698 extend-filesystems[1585]: resize2fs 1.47.2 (1-Jan-2025) Aug 13 01:12:03.188256 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Aug 13 01:12:03.193886 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 555003 blocks Aug 13 01:12:03.258991 kernel: EXT4-fs (sda9): resized filesystem to 555003 Aug 13 01:12:03.308106 extend-filesystems[1585]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Aug 13 01:12:03.308106 extend-filesystems[1585]: old_desc_blocks = 1, new_desc_blocks = 1 Aug 13 01:12:03.308106 extend-filesystems[1585]: The filesystem on /dev/sda9 is now 555003 (4k) blocks long. Aug 13 01:12:03.397691 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 13 01:12:03.402677 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 13 01:12:03.570501 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 01:12:03.570879 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 13 01:12:03.581425 extend-filesystems[1506]: Resized filesystem in /dev/sda9 Aug 13 01:12:03.583221 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 13 01:12:03.583730 systemd-logind[1515]: New seat seat0. Aug 13 01:12:03.585396 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 13 01:12:03.600977 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 13 01:12:03.666233 coreos-metadata[1587]: Aug 13 01:12:03.635 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Aug 13 01:12:03.745275 systemd[1]: Started systemd-logind.service - User Login Management. 
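The extend-filesystems entries above report the online resize of /dev/sda9 in 4 KiB filesystem blocks. As a quick cross-check (not part of the log), the block counts translate to bytes as follows; the 4096-byte block size is taken from the "(4k) blocks" wording in the resize2fs output.

    BLOCK = 4096                      # "(4k) blocks" per the resize2fs output above
    old_blocks, new_blocks = 553472, 555003
    print(f"old size: {old_blocks * BLOCK / 2**30:.2f} GiB")                 # ~2.11 GiB
    print(f"new size: {new_blocks * BLOCK / 2**30:.2f} GiB")                 # ~2.12 GiB
    print(f"growth:   {(new_blocks - old_blocks) * BLOCK / 2**20:.2f} MiB")  # ~5.98 MiB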
Aug 13 01:12:03.797297 coreos-metadata[1502]: Aug 13 01:12:03.796 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Aug 13 01:12:03.999338 containerd[1543]: time="2025-08-13T01:12:03Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Aug 13 01:12:04.016529 locksmithd[1550]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 01:12:04.033490 containerd[1543]: time="2025-08-13T01:12:04.033439930Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Aug 13 01:12:04.125874 containerd[1543]: time="2025-08-13T01:12:04.125795155Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="30.68µs" Aug 13 01:12:04.126585 containerd[1543]: time="2025-08-13T01:12:04.126562826Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Aug 13 01:12:04.126684 containerd[1543]: time="2025-08-13T01:12:04.126667646Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Aug 13 01:12:04.127184 containerd[1543]: time="2025-08-13T01:12:04.127150297Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Aug 13 01:12:04.127264 containerd[1543]: time="2025-08-13T01:12:04.127248577Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Aug 13 01:12:04.127639 containerd[1543]: time="2025-08-13T01:12:04.127616228Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Aug 13 01:12:04.128452 containerd[1543]: time="2025-08-13T01:12:04.127791489Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Aug 13 01:12:04.128527 containerd[1543]: time="2025-08-13T01:12:04.128510400Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Aug 13 01:12:04.128977 containerd[1543]: time="2025-08-13T01:12:04.128953011Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Aug 13 01:12:04.129055 containerd[1543]: time="2025-08-13T01:12:04.129040501Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Aug 13 01:12:04.129105 containerd[1543]: time="2025-08-13T01:12:04.129091411Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Aug 13 01:12:04.129165 containerd[1543]: time="2025-08-13T01:12:04.129150501Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Aug 13 01:12:04.129307 containerd[1543]: time="2025-08-13T01:12:04.129289542Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Aug 13 01:12:04.129679 containerd[1543]: time="2025-08-13T01:12:04.129658942Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Aug 13 
01:12:04.130170 containerd[1543]: time="2025-08-13T01:12:04.130151123Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Aug 13 01:12:04.130226 containerd[1543]: time="2025-08-13T01:12:04.130213323Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Aug 13 01:12:04.130310 containerd[1543]: time="2025-08-13T01:12:04.130295554Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Aug 13 01:12:04.131492 containerd[1543]: time="2025-08-13T01:12:04.131471936Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Aug 13 01:12:04.131619 containerd[1543]: time="2025-08-13T01:12:04.131602936Z" level=info msg="metadata content store policy set" policy=shared Aug 13 01:12:04.151389 containerd[1543]: time="2025-08-13T01:12:04.151339746Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Aug 13 01:12:04.151894 containerd[1543]: time="2025-08-13T01:12:04.151874987Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Aug 13 01:12:04.152051 containerd[1543]: time="2025-08-13T01:12:04.152034117Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Aug 13 01:12:04.152129 containerd[1543]: time="2025-08-13T01:12:04.152115027Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Aug 13 01:12:04.152199 containerd[1543]: time="2025-08-13T01:12:04.152186847Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Aug 13 01:12:04.152289 containerd[1543]: time="2025-08-13T01:12:04.152274928Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Aug 13 01:12:04.152358 containerd[1543]: time="2025-08-13T01:12:04.152346368Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Aug 13 01:12:04.152406 containerd[1543]: time="2025-08-13T01:12:04.152395568Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Aug 13 01:12:04.152473 containerd[1543]: time="2025-08-13T01:12:04.152461638Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Aug 13 01:12:04.152529 containerd[1543]: time="2025-08-13T01:12:04.152507058Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Aug 13 01:12:04.152590 containerd[1543]: time="2025-08-13T01:12:04.152577838Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Aug 13 01:12:04.152685 containerd[1543]: time="2025-08-13T01:12:04.152670898Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Aug 13 01:12:04.153938 containerd[1543]: time="2025-08-13T01:12:04.153920381Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Aug 13 01:12:04.154025 containerd[1543]: time="2025-08-13T01:12:04.154011671Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Aug 13 01:12:04.154093 containerd[1543]: 
time="2025-08-13T01:12:04.154079961Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Aug 13 01:12:04.154148 containerd[1543]: time="2025-08-13T01:12:04.154136061Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Aug 13 01:12:04.154204 containerd[1543]: time="2025-08-13T01:12:04.154181271Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Aug 13 01:12:04.154285 containerd[1543]: time="2025-08-13T01:12:04.154246861Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Aug 13 01:12:04.154376 containerd[1543]: time="2025-08-13T01:12:04.154361602Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Aug 13 01:12:04.154448 containerd[1543]: time="2025-08-13T01:12:04.154435342Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Aug 13 01:12:04.154542 containerd[1543]: time="2025-08-13T01:12:04.154527722Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Aug 13 01:12:04.154604 containerd[1543]: time="2025-08-13T01:12:04.154592742Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Aug 13 01:12:04.154667 containerd[1543]: time="2025-08-13T01:12:04.154653992Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Aug 13 01:12:04.154899 containerd[1543]: time="2025-08-13T01:12:04.154862653Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Aug 13 01:12:04.154974 containerd[1543]: time="2025-08-13T01:12:04.154962593Z" level=info msg="Start snapshots syncer" Aug 13 01:12:04.155065 containerd[1543]: time="2025-08-13T01:12:04.155051833Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Aug 13 01:12:04.155488 containerd[1543]: time="2025-08-13T01:12:04.155439624Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Aug 13 01:12:04.155675 containerd[1543]: time="2025-08-13T01:12:04.155658264Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Aug 13 01:12:04.158363 containerd[1543]: time="2025-08-13T01:12:04.155957865Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Aug 13 01:12:04.158510 containerd[1543]: time="2025-08-13T01:12:04.158340150Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Aug 13 01:12:04.158607 containerd[1543]: time="2025-08-13T01:12:04.158592410Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Aug 13 01:12:04.159794 containerd[1543]: time="2025-08-13T01:12:04.159768903Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Aug 13 01:12:04.159924 containerd[1543]: time="2025-08-13T01:12:04.159908433Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Aug 13 01:12:04.160326 containerd[1543]: time="2025-08-13T01:12:04.160303834Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Aug 13 01:12:04.160438 containerd[1543]: time="2025-08-13T01:12:04.160421514Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Aug 13 01:12:04.161518 containerd[1543]: time="2025-08-13T01:12:04.161497506Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Aug 13 01:12:04.161642 containerd[1543]: time="2025-08-13T01:12:04.161589356Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Aug 13 01:12:04.161714 containerd[1543]: 
time="2025-08-13T01:12:04.161699566Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Aug 13 01:12:04.161790 containerd[1543]: time="2025-08-13T01:12:04.161773417Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Aug 13 01:12:04.161926 containerd[1543]: time="2025-08-13T01:12:04.161895097Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Aug 13 01:12:04.162225 containerd[1543]: time="2025-08-13T01:12:04.162201927Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Aug 13 01:12:04.162389 containerd[1543]: time="2025-08-13T01:12:04.162373158Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Aug 13 01:12:04.162467 containerd[1543]: time="2025-08-13T01:12:04.162449908Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Aug 13 01:12:04.162542 containerd[1543]: time="2025-08-13T01:12:04.162529268Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Aug 13 01:12:04.162625 containerd[1543]: time="2025-08-13T01:12:04.162612868Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Aug 13 01:12:04.162713 containerd[1543]: time="2025-08-13T01:12:04.162696958Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Aug 13 01:12:04.162797 containerd[1543]: time="2025-08-13T01:12:04.162783779Z" level=info msg="runtime interface created" Aug 13 01:12:04.162951 containerd[1543]: time="2025-08-13T01:12:04.162912399Z" level=info msg="created NRI interface" Aug 13 01:12:04.163037 containerd[1543]: time="2025-08-13T01:12:04.163001429Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Aug 13 01:12:04.163216 containerd[1543]: time="2025-08-13T01:12:04.163082939Z" level=info msg="Connect containerd service" Aug 13 01:12:04.163362 containerd[1543]: time="2025-08-13T01:12:04.163300020Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 13 01:12:04.197338 containerd[1543]: time="2025-08-13T01:12:04.196433486Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 01:12:04.249971 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 13 01:12:04.270799 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 13 01:12:04.323124 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Aug 13 01:12:04.324015 systemd[1]: Reached target getty.target - Login Prompts. 
Aug 13 01:12:04.521923 systemd-networkd[1463]: eth0: DHCPv4 address 172.234.199.8/24, gateway 172.234.199.1 acquired from 23.192.120.221 Aug 13 01:12:04.523787 dbus-daemon[1503]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1463 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Aug 13 01:12:04.525351 systemd-timesyncd[1436]: Network configuration changed, trying to establish connection. Aug 13 01:12:04.573884 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Aug 13 01:12:04.621420 systemd-networkd[1463]: eth0: Gained IPv6LL Aug 13 01:12:04.622462 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 01:12:04.626113 systemd-timesyncd[1436]: Network configuration changed, trying to establish connection. Aug 13 01:12:04.674074 systemd-logind[1515]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 13 01:12:04.709885 coreos-metadata[1587]: Aug 13 01:12:04.696 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Aug 13 01:12:04.745306 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 13 01:12:04.747624 systemd[1]: Reached target network-online.target - Network is Online. Aug 13 01:12:04.776182 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:12:04.841479 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 13 01:12:04.960942 systemd-logind[1515]: Watching system buttons on /dev/input/event2 (Power Button) Aug 13 01:12:04.968355 coreos-metadata[1587]: Aug 13 01:12:04.967 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 Aug 13 01:12:05.141663 coreos-metadata[1587]: Aug 13 01:12:05.141 INFO Fetch successful Aug 13 01:12:05.213000 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Aug 13 01:12:05.302856 update-ssh-keys[1643]: Updated "/home/core/.ssh/authorized_keys" Aug 13 01:12:05.308359 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Aug 13 01:12:05.315778 systemd[1]: Finished sshkeys.service. Aug 13 01:12:05.326383 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 13 01:12:05.333588 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 13 01:12:05.339858 kernel: EDAC MC: Ver: 3.0.0 Aug 13 01:12:05.583372 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 13 01:12:05.739506 systemd[1]: Started sshd@0-172.234.199.8:22-147.75.109.163:59196.service - OpenSSH per-connection server daemon (147.75.109.163:59196). Aug 13 01:12:05.740625 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Aug 13 01:12:05.749646 dbus-daemon[1503]: [system] Successfully activated service 'org.freedesktop.hostname1' Aug 13 01:12:05.794077 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 13 01:12:05.794996 dbus-daemon[1503]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1630 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Aug 13 01:12:05.811639 containerd[1543]: time="2025-08-13T01:12:05.811577305Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Aug 13 01:12:05.812246 containerd[1543]: time="2025-08-13T01:12:05.811661246Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 01:12:05.812246 containerd[1543]: time="2025-08-13T01:12:05.811690936Z" level=info msg="Start subscribing containerd event" Aug 13 01:12:05.812246 containerd[1543]: time="2025-08-13T01:12:05.811722116Z" level=info msg="Start recovering state" Aug 13 01:12:05.832900 systemd[1]: Starting polkit.service - Authorization Manager... Aug 13 01:12:05.870243 containerd[1543]: time="2025-08-13T01:12:05.870172733Z" level=info msg="Start event monitor" Aug 13 01:12:05.870243 containerd[1543]: time="2025-08-13T01:12:05.870245493Z" level=info msg="Start cni network conf syncer for default" Aug 13 01:12:05.870243 containerd[1543]: time="2025-08-13T01:12:05.870257973Z" level=info msg="Start streaming server" Aug 13 01:12:05.870481 containerd[1543]: time="2025-08-13T01:12:05.870268783Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Aug 13 01:12:05.870481 containerd[1543]: time="2025-08-13T01:12:05.870277823Z" level=info msg="runtime interface starting up..." Aug 13 01:12:05.870481 containerd[1543]: time="2025-08-13T01:12:05.870284283Z" level=info msg="starting plugins..." Aug 13 01:12:05.870481 containerd[1543]: time="2025-08-13T01:12:05.870318683Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Aug 13 01:12:05.871324 systemd[1]: Started containerd.service - containerd container runtime. Aug 13 01:12:05.873107 containerd[1543]: time="2025-08-13T01:12:05.873059508Z" level=info msg="containerd successfully booted in 1.874283s" Aug 13 01:12:05.901371 coreos-metadata[1502]: Aug 13 01:12:05.900 INFO Putting http://169.254.169.254/v1/token: Attempt #3 Aug 13 01:12:06.106708 coreos-metadata[1502]: Aug 13 01:12:06.106 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 Aug 13 01:12:06.190964 systemd-timesyncd[1436]: Network configuration changed, trying to establish connection. Aug 13 01:12:06.381798 coreos-metadata[1502]: Aug 13 01:12:06.381 INFO Fetch successful Aug 13 01:12:06.381798 coreos-metadata[1502]: Aug 13 01:12:06.381 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 Aug 13 01:12:06.454491 sshd[1663]: Accepted publickey for core from 147.75.109.163 port 59196 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:12:06.458551 sshd-session[1663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:12:06.474591 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 13 01:12:06.509911 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 13 01:12:06.572903 systemd-logind[1515]: New session 1 of user core. Aug 13 01:12:06.632846 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 13 01:12:06.677436 polkitd[1666]: Started polkitd version 126 Aug 13 01:12:06.686054 coreos-metadata[1502]: Aug 13 01:12:06.680 INFO Fetch successful Aug 13 01:12:06.681036 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 13 01:12:06.722370 (systemd)[1677]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:12:06.724364 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
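The DHCPv4 lease logged a few entries earlier (172.234.199.8/24, gateway 172.234.199.1 acquired on eth0) can be sanity-checked with the standard library; a small sketch, not taken from the log:

    import ipaddress

    # Lease values as logged by systemd-networkd above.
    iface = ipaddress.ip_interface("172.234.199.8/24")
    gateway = ipaddress.ip_address("172.234.199.1")
    print(iface.network)             # 172.234.199.0/24
    print(gateway in iface.network)  # True -> gateway is on-link for this prefix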
Aug 13 01:12:06.725794 polkitd[1666]: Loading rules from directory /etc/polkit-1/rules.d Aug 13 01:12:06.727402 polkitd[1666]: Loading rules from directory /run/polkit-1/rules.d Aug 13 01:12:06.727487 polkitd[1666]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Aug 13 01:12:06.728072 polkitd[1666]: Loading rules from directory /usr/local/share/polkit-1/rules.d Aug 13 01:12:06.728111 polkitd[1666]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Aug 13 01:12:06.728174 polkitd[1666]: Loading rules from directory /usr/share/polkit-1/rules.d Aug 13 01:12:06.729829 polkitd[1666]: Finished loading, compiling and executing 2 rules Aug 13 01:12:06.731062 systemd[1]: Started polkit.service - Authorization Manager. Aug 13 01:12:06.733076 dbus-daemon[1503]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Aug 13 01:12:06.734740 systemd-logind[1515]: New session c1 of user core. Aug 13 01:12:06.744358 polkitd[1666]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Aug 13 01:12:06.934662 systemd-resolved[1411]: System hostname changed to '172-234-199-8'. Aug 13 01:12:06.934839 systemd-hostnamed[1630]: Hostname set to <172-234-199-8> (transient) Aug 13 01:12:06.946754 tar[1520]: linux-amd64/README.md Aug 13 01:12:07.015791 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 13 01:12:07.081076 systemd[1677]: Queued start job for default target default.target. Aug 13 01:12:07.092519 systemd[1677]: Created slice app.slice - User Application Slice. Aug 13 01:12:07.092558 systemd[1677]: Reached target paths.target - Paths. Aug 13 01:12:07.092955 systemd[1677]: Reached target timers.target - Timers. Aug 13 01:12:07.096772 systemd[1677]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 13 01:12:07.118937 systemd[1677]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 13 01:12:07.120076 systemd[1677]: Reached target sockets.target - Sockets. Aug 13 01:12:07.120228 systemd[1677]: Reached target basic.target - Basic System. Aug 13 01:12:07.120315 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 13 01:12:07.120944 systemd[1677]: Reached target default.target - Main User Target. Aug 13 01:12:07.120982 systemd[1677]: Startup finished in 372ms. Aug 13 01:12:07.157388 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 13 01:12:07.159362 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Aug 13 01:12:07.165722 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 13 01:12:07.463579 systemd[1]: Started sshd@1-172.234.199.8:22-147.75.109.163:50354.service - OpenSSH per-connection server daemon (147.75.109.163:50354). Aug 13 01:12:07.679850 systemd-timesyncd[1436]: Network configuration changed, trying to establish connection. Aug 13 01:12:07.848894 sshd[1718]: Accepted publickey for core from 147.75.109.163 port 50354 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:12:07.851887 sshd-session[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:12:07.859243 systemd-logind[1515]: New session 2 of user core. Aug 13 01:12:07.878485 systemd[1]: Started session-2.scope - Session 2 of User core. 
Aug 13 01:12:08.128092 sshd[1720]: Connection closed by 147.75.109.163 port 50354 Aug 13 01:12:08.128871 sshd-session[1718]: pam_unix(sshd:session): session closed for user core Aug 13 01:12:08.134949 systemd[1]: sshd@1-172.234.199.8:22-147.75.109.163:50354.service: Deactivated successfully. Aug 13 01:12:08.137613 systemd[1]: session-2.scope: Deactivated successfully. Aug 13 01:12:08.140708 systemd-logind[1515]: Session 2 logged out. Waiting for processes to exit. Aug 13 01:12:08.142324 systemd-logind[1515]: Removed session 2. Aug 13 01:12:08.207599 systemd[1]: Started sshd@2-172.234.199.8:22-147.75.109.163:50356.service - OpenSSH per-connection server daemon (147.75.109.163:50356). Aug 13 01:12:08.552432 sshd[1726]: Accepted publickey for core from 147.75.109.163 port 50356 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:12:08.554658 sshd-session[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:12:08.561270 systemd-logind[1515]: New session 3 of user core. Aug 13 01:12:08.568995 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 13 01:12:08.834676 sshd[1728]: Connection closed by 147.75.109.163 port 50356 Aug 13 01:12:08.836174 sshd-session[1726]: pam_unix(sshd:session): session closed for user core Aug 13 01:12:08.841873 systemd-logind[1515]: Session 3 logged out. Waiting for processes to exit. Aug 13 01:12:08.842530 systemd[1]: sshd@2-172.234.199.8:22-147.75.109.163:50356.service: Deactivated successfully. Aug 13 01:12:08.845655 systemd[1]: session-3.scope: Deactivated successfully. Aug 13 01:12:08.854919 systemd-logind[1515]: Removed session 3. Aug 13 01:12:08.885124 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:12:08.886352 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 13 01:12:08.890443 systemd[1]: Startup finished in 5.108s (kernel) + 10.029s (initrd) + 10.220s (userspace) = 25.358s. Aug 13 01:12:08.941462 (kubelet)[1738]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 01:12:09.913615 kubelet[1738]: E0813 01:12:09.913553 1738 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 01:12:09.918281 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 01:12:09.918547 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 01:12:09.919529 systemd[1]: kubelet.service: Consumed 2.769s CPU time, 266.7M memory peak. Aug 13 01:12:18.912495 systemd[1]: Started sshd@3-172.234.199.8:22-147.75.109.163:55966.service - OpenSSH per-connection server daemon (147.75.109.163:55966). Aug 13 01:12:19.280897 sshd[1750]: Accepted publickey for core from 147.75.109.163 port 55966 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:12:19.283123 sshd-session[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:12:19.291274 systemd-logind[1515]: New session 4 of user core. Aug 13 01:12:19.305061 systemd[1]: Started session-4.scope - Session 4 of User core. 
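The kubelet exits above because /var/lib/kubelet/config.yaml does not exist yet; on kubeadm-provisioned nodes that file is normally written during kubeadm init/join, which is why systemd keeps restarting the unit in the meantime. A minimal illustrative sketch follows; the field set below is an assumption, not this node's real configuration, with cgroupDriver: systemd chosen to match SystemdCgroup=true in the containerd CRI config logged earlier.

    import pathlib

    # Illustrative only -- not the node's actual configuration. Writing it requires root.
    lines = [
        "apiVersion: kubelet.config.k8s.io/v1beta1",
        "kind: KubeletConfiguration",
        "cgroupDriver: systemd",  # matches SystemdCgroup=true in the containerd CRI config above
    ]
    path = pathlib.Path("/var/lib/kubelet/config.yaml")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text("\n".join(lines) + "\n")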
Aug 13 01:12:19.536960 sshd[1752]: Connection closed by 147.75.109.163 port 55966 Aug 13 01:12:19.538333 sshd-session[1750]: pam_unix(sshd:session): session closed for user core Aug 13 01:12:19.543981 systemd-logind[1515]: Session 4 logged out. Waiting for processes to exit. Aug 13 01:12:19.544645 systemd[1]: sshd@3-172.234.199.8:22-147.75.109.163:55966.service: Deactivated successfully. Aug 13 01:12:19.547542 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 01:12:19.549694 systemd-logind[1515]: Removed session 4. Aug 13 01:12:19.604983 systemd[1]: Started sshd@4-172.234.199.8:22-147.75.109.163:55980.service - OpenSSH per-connection server daemon (147.75.109.163:55980). Aug 13 01:12:19.974568 sshd[1758]: Accepted publickey for core from 147.75.109.163 port 55980 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:12:19.976586 sshd-session[1758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:12:19.977794 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 13 01:12:19.980975 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:12:19.984856 systemd-logind[1515]: New session 5 of user core. Aug 13 01:12:19.994038 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 13 01:12:20.176024 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:12:20.192466 (kubelet)[1769]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 01:12:20.228941 sshd[1763]: Connection closed by 147.75.109.163 port 55980 Aug 13 01:12:20.229821 sshd-session[1758]: pam_unix(sshd:session): session closed for user core Aug 13 01:12:20.236388 systemd[1]: sshd@4-172.234.199.8:22-147.75.109.163:55980.service: Deactivated successfully. Aug 13 01:12:20.239629 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 01:12:20.245012 systemd-logind[1515]: Session 5 logged out. Waiting for processes to exit. Aug 13 01:12:20.247022 systemd-logind[1515]: Removed session 5. Aug 13 01:12:20.329889 systemd[1]: Started sshd@5-172.234.199.8:22-147.75.109.163:55990.service - OpenSSH per-connection server daemon (147.75.109.163:55990). Aug 13 01:12:20.337959 kubelet[1769]: E0813 01:12:20.337894 1769 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 01:12:20.344691 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 01:12:20.345621 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 01:12:20.346963 systemd[1]: kubelet.service: Consumed 313ms CPU time, 108.4M memory peak. Aug 13 01:12:20.717537 sshd[1780]: Accepted publickey for core from 147.75.109.163 port 55990 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:12:20.719794 sshd-session[1780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:12:20.728737 systemd-logind[1515]: New session 6 of user core. Aug 13 01:12:20.734989 systemd[1]: Started session-6.scope - Session 6 of User core. 
Aug 13 01:12:20.973825 sshd[1783]: Connection closed by 147.75.109.163 port 55990 Aug 13 01:12:20.974981 sshd-session[1780]: pam_unix(sshd:session): session closed for user core Aug 13 01:12:20.980753 systemd[1]: sshd@5-172.234.199.8:22-147.75.109.163:55990.service: Deactivated successfully. Aug 13 01:12:20.983407 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 01:12:20.984343 systemd-logind[1515]: Session 6 logged out. Waiting for processes to exit. Aug 13 01:12:20.986020 systemd-logind[1515]: Removed session 6. Aug 13 01:12:21.036679 systemd[1]: Started sshd@6-172.234.199.8:22-147.75.109.163:55992.service - OpenSSH per-connection server daemon (147.75.109.163:55992). Aug 13 01:12:21.390455 sshd[1789]: Accepted publickey for core from 147.75.109.163 port 55992 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:12:21.392687 sshd-session[1789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:12:21.399889 systemd-logind[1515]: New session 7 of user core. Aug 13 01:12:21.416068 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 13 01:12:21.598789 sudo[1792]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 13 01:12:21.599142 sudo[1792]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 01:12:21.618142 sudo[1792]: pam_unix(sudo:session): session closed for user root Aug 13 01:12:21.668507 sshd[1791]: Connection closed by 147.75.109.163 port 55992 Aug 13 01:12:21.670129 sshd-session[1789]: pam_unix(sshd:session): session closed for user core Aug 13 01:12:21.675603 systemd[1]: sshd@6-172.234.199.8:22-147.75.109.163:55992.service: Deactivated successfully. Aug 13 01:12:21.678463 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 01:12:21.679924 systemd-logind[1515]: Session 7 logged out. Waiting for processes to exit. Aug 13 01:12:21.681881 systemd-logind[1515]: Removed session 7. Aug 13 01:12:21.736151 systemd[1]: Started sshd@7-172.234.199.8:22-147.75.109.163:56002.service - OpenSSH per-connection server daemon (147.75.109.163:56002). Aug 13 01:12:22.087885 sshd[1798]: Accepted publickey for core from 147.75.109.163 port 56002 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:12:22.089983 sshd-session[1798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:12:22.096104 systemd-logind[1515]: New session 8 of user core. Aug 13 01:12:22.104964 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 13 01:12:22.285467 sudo[1802]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 13 01:12:22.285776 sudo[1802]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 01:12:22.291659 sudo[1802]: pam_unix(sudo:session): session closed for user root Aug 13 01:12:22.298426 sudo[1801]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Aug 13 01:12:22.298722 sudo[1801]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 01:12:22.310011 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 13 01:12:22.359766 augenrules[1824]: No rules Aug 13 01:12:22.360588 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 01:12:22.360898 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Aug 13 01:12:22.362661 sudo[1801]: pam_unix(sudo:session): session closed for user root Aug 13 01:12:22.413014 sshd[1800]: Connection closed by 147.75.109.163 port 56002 Aug 13 01:12:22.413986 sshd-session[1798]: pam_unix(sshd:session): session closed for user core Aug 13 01:12:22.419037 systemd[1]: sshd@7-172.234.199.8:22-147.75.109.163:56002.service: Deactivated successfully. Aug 13 01:12:22.422970 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 01:12:22.426602 systemd-logind[1515]: Session 8 logged out. Waiting for processes to exit. Aug 13 01:12:22.428207 systemd-logind[1515]: Removed session 8. Aug 13 01:12:22.481529 systemd[1]: Started sshd@8-172.234.199.8:22-147.75.109.163:56018.service - OpenSSH per-connection server daemon (147.75.109.163:56018). Aug 13 01:12:22.834858 sshd[1833]: Accepted publickey for core from 147.75.109.163 port 56018 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:12:22.836962 sshd-session[1833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:12:22.842364 systemd-logind[1515]: New session 9 of user core. Aug 13 01:12:22.849986 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 13 01:12:23.036340 sudo[1836]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 01:12:23.036716 sudo[1836]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 01:12:24.281657 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 13 01:12:24.296304 (dockerd)[1854]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 13 01:12:25.282828 dockerd[1854]: time="2025-08-13T01:12:25.282726459Z" level=info msg="Starting up" Aug 13 01:12:25.301859 dockerd[1854]: time="2025-08-13T01:12:25.301779807Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Aug 13 01:12:25.380633 dockerd[1854]: time="2025-08-13T01:12:25.380561794Z" level=info msg="Loading containers: start." Aug 13 01:12:25.391836 kernel: Initializing XFRM netlink socket Aug 13 01:12:25.645978 systemd-timesyncd[1436]: Network configuration changed, trying to establish connection. Aug 13 01:12:26.825367 systemd-timesyncd[1436]: Contacted time server [2600:3c02:e001:1d00::123:0]:123 (2.flatcar.pool.ntp.org). Aug 13 01:12:26.825461 systemd-resolved[1411]: Clock change detected. Flushing caches. Aug 13 01:12:26.825588 systemd-timesyncd[1436]: Initial clock synchronization to Wed 2025-08-13 01:12:26.824621 UTC. Aug 13 01:12:26.833604 systemd-networkd[1463]: docker0: Link UP Aug 13 01:12:26.837444 dockerd[1854]: time="2025-08-13T01:12:26.837400476Z" level=info msg="Loading containers: done." 
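Between the timesyncd entries above the journal timestamps jump by about a second: the daemon reports contacting 2.flatcar.pool.ntp.org and stepping the clock, and systemd-resolved flushes its caches in response. A rough estimate of the step (assuming it happened immediately after the 01:12:25.645978 entry), not part of the log:

    from datetime import datetime

    # Timestamps taken from the surrounding journal entries.
    before_step = datetime.fromisoformat("2025-08-13 01:12:25.645978")
    synced_to   = datetime.fromisoformat("2025-08-13 01:12:26.824621")
    print(f"clock stepped forward by ~{(synced_to - before_step).total_seconds():.2f} s")  # ~1.18 s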
Aug 13 01:12:26.855564 dockerd[1854]: time="2025-08-13T01:12:26.855508562Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 13 01:12:26.855955 dockerd[1854]: time="2025-08-13T01:12:26.855631272Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Aug 13 01:12:26.855955 dockerd[1854]: time="2025-08-13T01:12:26.855949353Z" level=info msg="Initializing buildkit" Aug 13 01:12:26.887560 dockerd[1854]: time="2025-08-13T01:12:26.887490886Z" level=info msg="Completed buildkit initialization" Aug 13 01:12:26.892880 dockerd[1854]: time="2025-08-13T01:12:26.892239756Z" level=info msg="Daemon has completed initialization" Aug 13 01:12:26.892501 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 13 01:12:26.895289 dockerd[1854]: time="2025-08-13T01:12:26.893338168Z" level=info msg="API listen on /run/docker.sock" Aug 13 01:12:28.031598 containerd[1543]: time="2025-08-13T01:12:28.031542324Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\"" Aug 13 01:12:28.869658 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount357214004.mount: Deactivated successfully. Aug 13 01:12:30.965383 containerd[1543]: time="2025-08-13T01:12:30.965285030Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:12:30.966439 containerd[1543]: time="2025-08-13T01:12:30.966408872Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.7: active requests=0, bytes read=28799994" Aug 13 01:12:30.967049 containerd[1543]: time="2025-08-13T01:12:30.967019763Z" level=info msg="ImageCreate event name:\"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:12:30.969629 containerd[1543]: time="2025-08-13T01:12:30.969593248Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:12:30.971555 containerd[1543]: time="2025-08-13T01:12:30.971230382Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.7\" with image id \"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\", size \"28796794\" in 2.939638128s" Aug 13 01:12:30.971555 containerd[1543]: time="2025-08-13T01:12:30.971268942Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\" returns image reference \"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\"" Aug 13 01:12:30.971888 containerd[1543]: time="2025-08-13T01:12:30.971854933Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\"" Aug 13 01:12:31.658285 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 13 01:12:31.660549 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:12:31.881251 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
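The "Pulled image" entry above reports both the image size and the wall-clock pull time, which allows a rough throughput estimate (a sketch, not from the log; numbers copied from the kube-apiserver pull above):

    size_bytes = 28_796_794   # repo size reported for registry.k8s.io/kube-apiserver:v1.32.7
    seconds = 2.939638128     # pull duration from the same entry
    print(f"~{size_bytes / seconds / 2**20:.1f} MiB/s")  # roughly 9.3 MiB/s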
Aug 13 01:12:31.890649 (kubelet)[2117]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 01:12:32.012250 kubelet[2117]: E0813 01:12:32.012034 2117 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 01:12:32.012993 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 01:12:32.013198 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 01:12:32.013839 systemd[1]: kubelet.service: Consumed 309ms CPU time, 109.8M memory peak. Aug 13 01:12:33.500406 containerd[1543]: time="2025-08-13T01:12:33.500321779Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:12:33.501641 containerd[1543]: time="2025-08-13T01:12:33.501573391Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.7: active requests=0, bytes read=24783636" Aug 13 01:12:33.503368 containerd[1543]: time="2025-08-13T01:12:33.502169142Z" level=info msg="ImageCreate event name:\"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:12:33.504873 containerd[1543]: time="2025-08-13T01:12:33.504843668Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:12:33.506306 containerd[1543]: time="2025-08-13T01:12:33.506275531Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.7\" with image id \"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\", size \"26385470\" in 2.534388228s" Aug 13 01:12:33.506443 containerd[1543]: time="2025-08-13T01:12:33.506419381Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\" returns image reference \"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\"" Aug 13 01:12:33.507187 containerd[1543]: time="2025-08-13T01:12:33.507079292Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\"" Aug 13 01:12:36.377141 containerd[1543]: time="2025-08-13T01:12:36.377085561Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:12:36.377987 containerd[1543]: time="2025-08-13T01:12:36.377956473Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.7: active requests=0, bytes read=19176921" Aug 13 01:12:36.378509 containerd[1543]: time="2025-08-13T01:12:36.378481754Z" level=info msg="ImageCreate event name:\"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:12:36.384527 containerd[1543]: time="2025-08-13T01:12:36.384250675Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:12:36.386773 containerd[1543]: time="2025-08-13T01:12:36.386737310Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.7\" with image id \"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\", size \"20778773\" in 2.879488467s" Aug 13 01:12:36.386773 containerd[1543]: time="2025-08-13T01:12:36.386767760Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\" returns image reference \"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\"" Aug 13 01:12:36.387644 containerd[1543]: time="2025-08-13T01:12:36.387623372Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\"" Aug 13 01:12:38.087221 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Aug 13 01:12:38.125195 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4281971255.mount: Deactivated successfully. Aug 13 01:12:39.105892 containerd[1543]: time="2025-08-13T01:12:39.105231026Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:12:39.105892 containerd[1543]: time="2025-08-13T01:12:39.105852117Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.7: active requests=0, bytes read=30895380" Aug 13 01:12:39.106708 containerd[1543]: time="2025-08-13T01:12:39.106676809Z" level=info msg="ImageCreate event name:\"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:12:39.108580 containerd[1543]: time="2025-08-13T01:12:39.108529413Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:12:39.109797 containerd[1543]: time="2025-08-13T01:12:39.109752015Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.7\" with image id \"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\", repo tag \"registry.k8s.io/kube-proxy:v1.32.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\", size \"30894399\" in 2.722096683s" Aug 13 01:12:39.109876 containerd[1543]: time="2025-08-13T01:12:39.109858855Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\" returns image reference \"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\"" Aug 13 01:12:39.110522 containerd[1543]: time="2025-08-13T01:12:39.110486746Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 01:12:39.889827 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount236751816.mount: Deactivated successfully. 
Aug 13 01:12:41.423953 containerd[1543]: time="2025-08-13T01:12:41.423880282Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:12:41.425750 containerd[1543]: time="2025-08-13T01:12:41.425724686Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Aug 13 01:12:41.426211 containerd[1543]: time="2025-08-13T01:12:41.426188767Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:12:41.431376 containerd[1543]: time="2025-08-13T01:12:41.430586076Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:12:41.432487 containerd[1543]: time="2025-08-13T01:12:41.432436469Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.321913422s" Aug 13 01:12:41.432487 containerd[1543]: time="2025-08-13T01:12:41.432472239Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 13 01:12:41.433397 containerd[1543]: time="2025-08-13T01:12:41.433215491Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 13 01:12:42.109620 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Aug 13 01:12:42.112698 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:12:42.145642 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2367172679.mount: Deactivated successfully. 
Aug 13 01:12:42.152655 containerd[1543]: time="2025-08-13T01:12:42.152336879Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 01:12:42.154475 containerd[1543]: time="2025-08-13T01:12:42.154412063Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Aug 13 01:12:42.155292 containerd[1543]: time="2025-08-13T01:12:42.155266125Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 01:12:42.158892 containerd[1543]: time="2025-08-13T01:12:42.158852942Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 01:12:42.159565 containerd[1543]: time="2025-08-13T01:12:42.159527643Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 726.271242ms" Aug 13 01:12:42.159565 containerd[1543]: time="2025-08-13T01:12:42.159562933Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 13 01:12:42.160480 containerd[1543]: time="2025-08-13T01:12:42.160440815Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Aug 13 01:12:42.324628 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:12:42.337691 (kubelet)[2207]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 01:12:42.471020 kubelet[2207]: E0813 01:12:42.470950 2207 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 01:12:42.475323 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 01:12:42.475561 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 01:12:42.476090 systemd[1]: kubelet.service: Consumed 330ms CPU time, 108.5M memory peak. Aug 13 01:12:42.960977 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1787659009.mount: Deactivated successfully. 
Aug 13 01:12:45.034555 containerd[1543]: time="2025-08-13T01:12:45.034493232Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:12:45.035923 containerd[1543]: time="2025-08-13T01:12:45.035682724Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" Aug 13 01:12:45.036427 containerd[1543]: time="2025-08-13T01:12:45.036399286Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:12:45.039302 containerd[1543]: time="2025-08-13T01:12:45.039271511Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:12:45.040495 containerd[1543]: time="2025-08-13T01:12:45.040464364Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.879981829s" Aug 13 01:12:45.040547 containerd[1543]: time="2025-08-13T01:12:45.040502224Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Aug 13 01:12:46.860671 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:12:46.860828 systemd[1]: kubelet.service: Consumed 330ms CPU time, 108.5M memory peak. Aug 13 01:12:46.863879 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:12:46.904155 systemd[1]: Reload requested from client PID 2296 ('systemctl') (unit session-9.scope)... Aug 13 01:12:46.904333 systemd[1]: Reloading... Aug 13 01:12:47.085402 zram_generator::config[2351]: No configuration found. Aug 13 01:12:47.184453 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 01:12:47.310487 systemd[1]: Reloading finished in 405 ms. Aug 13 01:12:47.378276 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 13 01:12:47.378398 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 13 01:12:47.378697 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:12:47.378748 systemd[1]: kubelet.service: Consumed 145ms CPU time, 98.3M memory peak. Aug 13 01:12:47.380811 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:12:47.632732 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:12:47.641759 (kubelet)[2393]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 01:12:47.812924 kubelet[2393]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 01:12:47.812924 kubelet[2393]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Aug 13 01:12:47.812924 kubelet[2393]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 01:12:47.813583 kubelet[2393]: I0813 01:12:47.812943 2393 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 01:12:48.305724 kubelet[2393]: I0813 01:12:48.305670 2393 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Aug 13 01:12:48.305724 kubelet[2393]: I0813 01:12:48.305706 2393 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 01:12:48.306085 kubelet[2393]: I0813 01:12:48.306061 2393 server.go:954] "Client rotation is on, will bootstrap in background" Aug 13 01:12:48.329710 kubelet[2393]: E0813 01:12:48.329658 2393 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.234.199.8:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.234.199.8:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:12:48.330816 kubelet[2393]: I0813 01:12:48.330686 2393 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 01:12:48.342402 kubelet[2393]: I0813 01:12:48.342347 2393 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Aug 13 01:12:48.349007 kubelet[2393]: I0813 01:12:48.348712 2393 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 01:12:48.351514 kubelet[2393]: I0813 01:12:48.351478 2393 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 01:12:48.351750 kubelet[2393]: I0813 01:12:48.351577 2393 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-234-199-8","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 01:12:48.351911 kubelet[2393]: I0813 01:12:48.351898 2393 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 01:12:48.351969 kubelet[2393]: I0813 01:12:48.351960 2393 container_manager_linux.go:304] "Creating device plugin manager" Aug 13 01:12:48.352416 kubelet[2393]: I0813 01:12:48.352404 2393 state_mem.go:36] "Initialized new in-memory state store" Aug 13 01:12:48.356803 kubelet[2393]: I0813 01:12:48.356667 2393 kubelet.go:446] "Attempting to sync node with API server" Aug 13 01:12:48.356803 kubelet[2393]: I0813 01:12:48.356706 2393 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 01:12:48.356803 kubelet[2393]: I0813 01:12:48.356740 2393 kubelet.go:352] "Adding apiserver pod source" Aug 13 01:12:48.356803 kubelet[2393]: I0813 01:12:48.356755 2393 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 01:12:48.386878 kubelet[2393]: W0813 01:12:48.386836 2393 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.234.199.8:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-234-199-8&limit=500&resourceVersion=0": dial tcp 172.234.199.8:6443: connect: connection refused Aug 13 01:12:48.387552 kubelet[2393]: E0813 01:12:48.387057 2393 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.234.199.8:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-234-199-8&limit=500&resourceVersion=0\": dial tcp 172.234.199.8:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:12:48.387552 kubelet[2393]: W0813 01:12:48.387481 
2393 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.234.199.8:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.234.199.8:6443: connect: connection refused Aug 13 01:12:48.387552 kubelet[2393]: E0813 01:12:48.387517 2393 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.234.199.8:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.234.199.8:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:12:48.387916 kubelet[2393]: I0813 01:12:48.387898 2393 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Aug 13 01:12:48.388406 kubelet[2393]: I0813 01:12:48.388393 2393 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 01:12:48.389187 kubelet[2393]: W0813 01:12:48.389171 2393 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 13 01:12:48.392267 kubelet[2393]: I0813 01:12:48.392250 2393 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 01:12:48.392371 kubelet[2393]: I0813 01:12:48.392341 2393 server.go:1287] "Started kubelet" Aug 13 01:12:48.395392 kubelet[2393]: I0813 01:12:48.394749 2393 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 01:12:48.396474 kubelet[2393]: I0813 01:12:48.395893 2393 server.go:479] "Adding debug handlers to kubelet server" Aug 13 01:12:48.464396 kubelet[2393]: I0813 01:12:48.427551 2393 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 01:12:48.468425 kubelet[2393]: I0813 01:12:48.468398 2393 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 01:12:48.471965 kubelet[2393]: I0813 01:12:48.471252 2393 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 01:12:48.471965 kubelet[2393]: E0813 01:12:48.471559 2393 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-234-199-8\" not found" Aug 13 01:12:48.476027 kubelet[2393]: E0813 01:12:48.472847 2393 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.234.199.8:6443/api/v1/namespaces/default/events\": dial tcp 172.234.199.8:6443: connect: connection refused" event="&Event{ObjectMeta:{172-234-199-8.185b2e76fdefe808 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-234-199-8,UID:172-234-199-8,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-234-199-8,},FirstTimestamp:2025-08-13 01:12:48.392316936 +0000 UTC m=+0.745074761,LastTimestamp:2025-08-13 01:12:48.392316936 +0000 UTC m=+0.745074761,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-234-199-8,}" Aug 13 01:12:48.476027 kubelet[2393]: E0813 01:12:48.475166 2393 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.199.8:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-199-8?timeout=10s\": dial tcp 172.234.199.8:6443: connect: connection refused" 
interval="200ms" Aug 13 01:12:48.476027 kubelet[2393]: I0813 01:12:48.475197 2393 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 01:12:48.476675 kubelet[2393]: I0813 01:12:48.476642 2393 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 01:12:48.477815 kubelet[2393]: I0813 01:12:48.477338 2393 factory.go:221] Registration of the systemd container factory successfully Aug 13 01:12:48.477922 kubelet[2393]: I0813 01:12:48.477893 2393 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 01:12:48.479989 kubelet[2393]: I0813 01:12:48.479970 2393 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 01:12:48.480448 kubelet[2393]: I0813 01:12:48.480434 2393 reconciler.go:26] "Reconciler: start to sync state" Aug 13 01:12:48.488065 kubelet[2393]: W0813 01:12:48.488011 2393 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.234.199.8:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.234.199.8:6443: connect: connection refused Aug 13 01:12:48.488283 kubelet[2393]: E0813 01:12:48.488238 2393 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.234.199.8:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.234.199.8:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:12:48.498651 kubelet[2393]: I0813 01:12:48.496862 2393 factory.go:221] Registration of the containerd container factory successfully Aug 13 01:12:48.520818 kubelet[2393]: E0813 01:12:48.520779 2393 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 01:12:48.523726 kubelet[2393]: I0813 01:12:48.523688 2393 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 01:12:48.536106 kubelet[2393]: I0813 01:12:48.536059 2393 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 13 01:12:48.536272 kubelet[2393]: I0813 01:12:48.536262 2393 status_manager.go:227] "Starting to sync pod status with apiserver" Aug 13 01:12:48.536338 kubelet[2393]: I0813 01:12:48.536326 2393 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Aug 13 01:12:48.536432 kubelet[2393]: I0813 01:12:48.536422 2393 kubelet.go:2382] "Starting kubelet main sync loop" Aug 13 01:12:48.536577 kubelet[2393]: E0813 01:12:48.536557 2393 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 01:12:48.540344 kubelet[2393]: W0813 01:12:48.540306 2393 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.234.199.8:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.234.199.8:6443: connect: connection refused Aug 13 01:12:48.540585 kubelet[2393]: E0813 01:12:48.540526 2393 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.234.199.8:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.234.199.8:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:12:48.549760 kubelet[2393]: I0813 01:12:48.549738 2393 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 01:12:48.550072 kubelet[2393]: I0813 01:12:48.549923 2393 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 01:12:48.550072 kubelet[2393]: I0813 01:12:48.549946 2393 state_mem.go:36] "Initialized new in-memory state store" Aug 13 01:12:48.558638 kubelet[2393]: I0813 01:12:48.558508 2393 policy_none.go:49] "None policy: Start" Aug 13 01:12:48.558638 kubelet[2393]: I0813 01:12:48.558561 2393 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 01:12:48.558638 kubelet[2393]: I0813 01:12:48.558577 2393 state_mem.go:35] "Initializing new in-memory state store" Aug 13 01:12:48.568735 kubelet[2393]: E0813 01:12:48.568594 2393 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.234.199.8:6443/api/v1/namespaces/default/events\": dial tcp 172.234.199.8:6443: connect: connection refused" event="&Event{ObjectMeta:{172-234-199-8.185b2e76fdefe808 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-234-199-8,UID:172-234-199-8,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-234-199-8,},FirstTimestamp:2025-08-13 01:12:48.392316936 +0000 UTC m=+0.745074761,LastTimestamp:2025-08-13 01:12:48.392316936 +0000 UTC m=+0.745074761,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-234-199-8,}" Aug 13 01:12:48.572579 kubelet[2393]: E0813 01:12:48.572548 2393 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-234-199-8\" not found" Aug 13 01:12:48.577791 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 13 01:12:48.593721 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 13 01:12:48.598241 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
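Every "dial tcp 172.234.199.8:6443: connect: connection refused" above is the kubelet reaching for an API server whose own static pod has not started yet. A stand-alone probe of the same endpoint (address copied from the log), shown only to illustrate what those dial errors are testing:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "172.234.199.8:6443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable yet:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}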
Aug 13 01:12:48.610014 kubelet[2393]: I0813 01:12:48.609750 2393 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 01:12:48.611399 kubelet[2393]: I0813 01:12:48.610812 2393 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 01:12:48.611399 kubelet[2393]: I0813 01:12:48.610828 2393 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 01:12:48.611399 kubelet[2393]: I0813 01:12:48.611343 2393 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 01:12:48.614691 kubelet[2393]: E0813 01:12:48.613954 2393 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Aug 13 01:12:48.615337 kubelet[2393]: E0813 01:12:48.615321 2393 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-234-199-8\" not found" Aug 13 01:12:48.648601 systemd[1]: Created slice kubepods-burstable-pod2aaf346e9711f28d5457dd4106439e3a.slice - libcontainer container kubepods-burstable-pod2aaf346e9711f28d5457dd4106439e3a.slice. Aug 13 01:12:48.657372 kubelet[2393]: E0813 01:12:48.657320 2393 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-199-8\" not found" node="172-234-199-8" Aug 13 01:12:48.661249 systemd[1]: Created slice kubepods-burstable-podd88e1f2f1a2594d7a178f97575c600e2.slice - libcontainer container kubepods-burstable-podd88e1f2f1a2594d7a178f97575c600e2.slice. Aug 13 01:12:48.663689 kubelet[2393]: E0813 01:12:48.663633 2393 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-199-8\" not found" node="172-234-199-8" Aug 13 01:12:48.664539 update_engine[1517]: I20250813 01:12:48.664436 1517 update_attempter.cc:509] Updating boot flags... Aug 13 01:12:48.668321 systemd[1]: Created slice kubepods-burstable-podd20453b778ac7f92cd11ee60c0ce748c.slice - libcontainer container kubepods-burstable-podd20453b778ac7f92cd11ee60c0ce748c.slice. 
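The kubepods-burstable-pod*.slice units created above correspond to the static pods the kubelet reads from /etc/kubernetes/manifests (the path announced earlier by "Adding static pod path"). A sketch, assuming shell access to the node, that simply lists those manifest files:

package main

import (
	"fmt"
	"os"
)

func main() {
	entries, err := os.ReadDir("/etc/kubernetes/manifests") // path from the "Adding static pod path" record
	if err != nil {
		fmt.Println("cannot read manifest dir:", err)
		return
	}
	for _, e := range entries {
		fmt.Println("static pod manifest:", e.Name())
	}
}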
Aug 13 01:12:48.672530 kubelet[2393]: E0813 01:12:48.672502 2393 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-199-8\" not found" node="172-234-199-8" Aug 13 01:12:48.675950 kubelet[2393]: E0813 01:12:48.675921 2393 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.199.8:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-199-8?timeout=10s\": dial tcp 172.234.199.8:6443: connect: connection refused" interval="400ms" Aug 13 01:12:48.684467 kubelet[2393]: I0813 01:12:48.684445 2393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d20453b778ac7f92cd11ee60c0ce748c-flexvolume-dir\") pod \"kube-controller-manager-172-234-199-8\" (UID: \"d20453b778ac7f92cd11ee60c0ce748c\") " pod="kube-system/kube-controller-manager-172-234-199-8" Aug 13 01:12:48.684605 kubelet[2393]: I0813 01:12:48.684577 2393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d20453b778ac7f92cd11ee60c0ce748c-kubeconfig\") pod \"kube-controller-manager-172-234-199-8\" (UID: \"d20453b778ac7f92cd11ee60c0ce748c\") " pod="kube-system/kube-controller-manager-172-234-199-8" Aug 13 01:12:48.684747 kubelet[2393]: I0813 01:12:48.684695 2393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2aaf346e9711f28d5457dd4106439e3a-kubeconfig\") pod \"kube-scheduler-172-234-199-8\" (UID: \"2aaf346e9711f28d5457dd4106439e3a\") " pod="kube-system/kube-scheduler-172-234-199-8" Aug 13 01:12:48.684820 kubelet[2393]: I0813 01:12:48.684725 2393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d88e1f2f1a2594d7a178f97575c600e2-k8s-certs\") pod \"kube-apiserver-172-234-199-8\" (UID: \"d88e1f2f1a2594d7a178f97575c600e2\") " pod="kube-system/kube-apiserver-172-234-199-8" Aug 13 01:12:48.702484 kubelet[2393]: I0813 01:12:48.684880 2393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d88e1f2f1a2594d7a178f97575c600e2-usr-share-ca-certificates\") pod \"kube-apiserver-172-234-199-8\" (UID: \"d88e1f2f1a2594d7a178f97575c600e2\") " pod="kube-system/kube-apiserver-172-234-199-8" Aug 13 01:12:48.702484 kubelet[2393]: I0813 01:12:48.684899 2393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d20453b778ac7f92cd11ee60c0ce748c-ca-certs\") pod \"kube-controller-manager-172-234-199-8\" (UID: \"d20453b778ac7f92cd11ee60c0ce748c\") " pod="kube-system/kube-controller-manager-172-234-199-8" Aug 13 01:12:48.702484 kubelet[2393]: I0813 01:12:48.684913 2393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d20453b778ac7f92cd11ee60c0ce748c-k8s-certs\") pod \"kube-controller-manager-172-234-199-8\" (UID: \"d20453b778ac7f92cd11ee60c0ce748c\") " pod="kube-system/kube-controller-manager-172-234-199-8" Aug 13 01:12:48.702484 kubelet[2393]: I0813 01:12:48.684933 2393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d20453b778ac7f92cd11ee60c0ce748c-usr-share-ca-certificates\") pod \"kube-controller-manager-172-234-199-8\" (UID: \"d20453b778ac7f92cd11ee60c0ce748c\") " pod="kube-system/kube-controller-manager-172-234-199-8" Aug 13 01:12:48.703782 kubelet[2393]: I0813 01:12:48.684959 2393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d88e1f2f1a2594d7a178f97575c600e2-ca-certs\") pod \"kube-apiserver-172-234-199-8\" (UID: \"d88e1f2f1a2594d7a178f97575c600e2\") " pod="kube-system/kube-apiserver-172-234-199-8" Aug 13 01:12:48.715337 kubelet[2393]: I0813 01:12:48.714968 2393 kubelet_node_status.go:75] "Attempting to register node" node="172-234-199-8" Aug 13 01:12:48.715600 kubelet[2393]: E0813 01:12:48.715580 2393 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.234.199.8:6443/api/v1/nodes\": dial tcp 172.234.199.8:6443: connect: connection refused" node="172-234-199-8" Aug 13 01:12:48.937275 kubelet[2393]: I0813 01:12:48.934701 2393 kubelet_node_status.go:75] "Attempting to register node" node="172-234-199-8" Aug 13 01:12:48.937275 kubelet[2393]: E0813 01:12:48.935454 2393 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.234.199.8:6443/api/v1/nodes\": dial tcp 172.234.199.8:6443: connect: connection refused" node="172-234-199-8" Aug 13 01:12:48.959398 kubelet[2393]: E0813 01:12:48.958793 2393 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:12:48.962631 containerd[1543]: time="2025-08-13T01:12:48.962552006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-234-199-8,Uid:2aaf346e9711f28d5457dd4106439e3a,Namespace:kube-system,Attempt:0,}" Aug 13 01:12:48.964898 kubelet[2393]: E0813 01:12:48.964856 2393 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:12:48.966803 containerd[1543]: time="2025-08-13T01:12:48.966535794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-234-199-8,Uid:d88e1f2f1a2594d7a178f97575c600e2,Namespace:kube-system,Attempt:0,}" Aug 13 01:12:48.976255 kubelet[2393]: E0813 01:12:48.975734 2393 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:12:48.976391 containerd[1543]: time="2025-08-13T01:12:48.976296244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-234-199-8,Uid:d20453b778ac7f92cd11ee60c0ce748c,Namespace:kube-system,Attempt:0,}" Aug 13 01:12:49.106387 kubelet[2393]: E0813 01:12:49.104290 2393 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.199.8:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-199-8?timeout=10s\": dial tcp 172.234.199.8:6443: connect: connection refused" interval="800ms" Aug 13 01:12:49.180461 containerd[1543]: time="2025-08-13T01:12:49.171178983Z" level=info msg="connecting to shim acc7fc58a90b7bd17f67828ad2abe2bb67b4ae0a822b9e96330d60e5333d262f" 
address="unix:///run/containerd/s/cca82a56ff5883e8f0379eb96d690a0151ccf0df8b0baa370dabf2d6afcbaef5" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:12:49.224099 kubelet[2393]: W0813 01:12:49.223866 2393 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.234.199.8:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-234-199-8&limit=500&resourceVersion=0": dial tcp 172.234.199.8:6443: connect: connection refused Aug 13 01:12:49.224099 kubelet[2393]: E0813 01:12:49.223954 2393 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.234.199.8:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-234-199-8&limit=500&resourceVersion=0\": dial tcp 172.234.199.8:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:12:49.228664 containerd[1543]: time="2025-08-13T01:12:49.228611808Z" level=info msg="connecting to shim 1e8aab97973514819ffad2f4a548cea310551cd45e5e6e946426f5f8980df490" address="unix:///run/containerd/s/1ae9d164e124e38def9ddca18e1943ccec71c81f64710c322740a3031e866d7c" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:12:49.252777 containerd[1543]: time="2025-08-13T01:12:49.252690466Z" level=info msg="connecting to shim f6bfc190b7a29348428112244dd81b24cea4c042f5d77c7702be12bca296add7" address="unix:///run/containerd/s/49ea2fb6316edf2fc8272b6dca9f75dd081c47250b05b56d81d3801f969a112b" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:12:49.307531 systemd[1]: Started cri-containerd-acc7fc58a90b7bd17f67828ad2abe2bb67b4ae0a822b9e96330d60e5333d262f.scope - libcontainer container acc7fc58a90b7bd17f67828ad2abe2bb67b4ae0a822b9e96330d60e5333d262f. Aug 13 01:12:49.349306 systemd[1]: Started cri-containerd-1e8aab97973514819ffad2f4a548cea310551cd45e5e6e946426f5f8980df490.scope - libcontainer container 1e8aab97973514819ffad2f4a548cea310551cd45e5e6e946426f5f8980df490. Aug 13 01:12:49.353524 kubelet[2393]: I0813 01:12:49.353408 2393 kubelet_node_status.go:75] "Attempting to register node" node="172-234-199-8" Aug 13 01:12:49.356107 kubelet[2393]: E0813 01:12:49.356040 2393 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.234.199.8:6443/api/v1/nodes\": dial tcp 172.234.199.8:6443: connect: connection refused" node="172-234-199-8" Aug 13 01:12:49.406612 systemd[1]: Started cri-containerd-f6bfc190b7a29348428112244dd81b24cea4c042f5d77c7702be12bca296add7.scope - libcontainer container f6bfc190b7a29348428112244dd81b24cea4c042f5d77c7702be12bca296add7. 
Aug 13 01:12:49.483157 kubelet[2393]: W0813 01:12:49.482995 2393 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.234.199.8:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.234.199.8:6443: connect: connection refused Aug 13 01:12:49.483157 kubelet[2393]: E0813 01:12:49.483085 2393 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.234.199.8:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.234.199.8:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:12:49.489793 kubelet[2393]: W0813 01:12:49.489731 2393 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.234.199.8:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.234.199.8:6443: connect: connection refused Aug 13 01:12:49.489793 kubelet[2393]: E0813 01:12:49.489791 2393 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.234.199.8:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.234.199.8:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:12:49.551872 containerd[1543]: time="2025-08-13T01:12:49.551815284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-234-199-8,Uid:d88e1f2f1a2594d7a178f97575c600e2,Namespace:kube-system,Attempt:0,} returns sandbox id \"acc7fc58a90b7bd17f67828ad2abe2bb67b4ae0a822b9e96330d60e5333d262f\"" Aug 13 01:12:49.553637 kubelet[2393]: E0813 01:12:49.553591 2393 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:12:49.558975 containerd[1543]: time="2025-08-13T01:12:49.558921729Z" level=info msg="CreateContainer within sandbox \"acc7fc58a90b7bd17f67828ad2abe2bb67b4ae0a822b9e96330d60e5333d262f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 01:12:49.572842 containerd[1543]: time="2025-08-13T01:12:49.572657996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-234-199-8,Uid:2aaf346e9711f28d5457dd4106439e3a,Namespace:kube-system,Attempt:0,} returns sandbox id \"f6bfc190b7a29348428112244dd81b24cea4c042f5d77c7702be12bca296add7\"" Aug 13 01:12:49.575892 kubelet[2393]: E0813 01:12:49.575859 2393 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:12:49.577709 containerd[1543]: time="2025-08-13T01:12:49.577610096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-234-199-8,Uid:d20453b778ac7f92cd11ee60c0ce748c,Namespace:kube-system,Attempt:0,} returns sandbox id \"1e8aab97973514819ffad2f4a548cea310551cd45e5e6e946426f5f8980df490\"" Aug 13 01:12:49.581383 kubelet[2393]: E0813 01:12:49.580629 2393 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:12:49.582071 containerd[1543]: time="2025-08-13T01:12:49.581792054Z" level=info 
msg="CreateContainer within sandbox \"f6bfc190b7a29348428112244dd81b24cea4c042f5d77c7702be12bca296add7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 01:12:49.583085 containerd[1543]: time="2025-08-13T01:12:49.583040637Z" level=info msg="Container b70970d2f0d0a691418a4195bbe657374f8556aa822a6eb22c170f056522a87d: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:12:49.584501 containerd[1543]: time="2025-08-13T01:12:49.584462790Z" level=info msg="CreateContainer within sandbox \"1e8aab97973514819ffad2f4a548cea310551cd45e5e6e946426f5f8980df490\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 01:12:49.594320 containerd[1543]: time="2025-08-13T01:12:49.594246869Z" level=info msg="CreateContainer within sandbox \"acc7fc58a90b7bd17f67828ad2abe2bb67b4ae0a822b9e96330d60e5333d262f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b70970d2f0d0a691418a4195bbe657374f8556aa822a6eb22c170f056522a87d\"" Aug 13 01:12:49.596314 containerd[1543]: time="2025-08-13T01:12:49.595528942Z" level=info msg="Container 7f3051528cc47aaf300403510c5dac8cb92f90f288d474ee4db176d129a634c7: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:12:49.596845 containerd[1543]: time="2025-08-13T01:12:49.596802254Z" level=info msg="StartContainer for \"b70970d2f0d0a691418a4195bbe657374f8556aa822a6eb22c170f056522a87d\"" Aug 13 01:12:49.598216 containerd[1543]: time="2025-08-13T01:12:49.598165227Z" level=info msg="Container d8ebefa6922e075a29a54fa3cf0ce2c421f3b0a923909c4874a936bf44ed6e71: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:12:49.600159 containerd[1543]: time="2025-08-13T01:12:49.600100021Z" level=info msg="connecting to shim b70970d2f0d0a691418a4195bbe657374f8556aa822a6eb22c170f056522a87d" address="unix:///run/containerd/s/cca82a56ff5883e8f0379eb96d690a0151ccf0df8b0baa370dabf2d6afcbaef5" protocol=ttrpc version=3 Aug 13 01:12:49.604365 containerd[1543]: time="2025-08-13T01:12:49.604308819Z" level=info msg="CreateContainer within sandbox \"1e8aab97973514819ffad2f4a548cea310551cd45e5e6e946426f5f8980df490\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7f3051528cc47aaf300403510c5dac8cb92f90f288d474ee4db176d129a634c7\"" Aug 13 01:12:49.604950 containerd[1543]: time="2025-08-13T01:12:49.604915011Z" level=info msg="StartContainer for \"7f3051528cc47aaf300403510c5dac8cb92f90f288d474ee4db176d129a634c7\"" Aug 13 01:12:49.606589 containerd[1543]: time="2025-08-13T01:12:49.606550484Z" level=info msg="CreateContainer within sandbox \"f6bfc190b7a29348428112244dd81b24cea4c042f5d77c7702be12bca296add7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d8ebefa6922e075a29a54fa3cf0ce2c421f3b0a923909c4874a936bf44ed6e71\"" Aug 13 01:12:49.607918 containerd[1543]: time="2025-08-13T01:12:49.607692806Z" level=info msg="StartContainer for \"d8ebefa6922e075a29a54fa3cf0ce2c421f3b0a923909c4874a936bf44ed6e71\"" Aug 13 01:12:49.608410 containerd[1543]: time="2025-08-13T01:12:49.608324767Z" level=info msg="connecting to shim 7f3051528cc47aaf300403510c5dac8cb92f90f288d474ee4db176d129a634c7" address="unix:///run/containerd/s/1ae9d164e124e38def9ddca18e1943ccec71c81f64710c322740a3031e866d7c" protocol=ttrpc version=3 Aug 13 01:12:49.612821 containerd[1543]: time="2025-08-13T01:12:49.612778116Z" level=info msg="connecting to shim d8ebefa6922e075a29a54fa3cf0ce2c421f3b0a923909c4874a936bf44ed6e71" 
address="unix:///run/containerd/s/49ea2fb6316edf2fc8272b6dca9f75dd081c47250b05b56d81d3801f969a112b" protocol=ttrpc version=3 Aug 13 01:12:49.633652 systemd[1]: Started cri-containerd-b70970d2f0d0a691418a4195bbe657374f8556aa822a6eb22c170f056522a87d.scope - libcontainer container b70970d2f0d0a691418a4195bbe657374f8556aa822a6eb22c170f056522a87d. Aug 13 01:12:49.660617 systemd[1]: Started cri-containerd-7f3051528cc47aaf300403510c5dac8cb92f90f288d474ee4db176d129a634c7.scope - libcontainer container 7f3051528cc47aaf300403510c5dac8cb92f90f288d474ee4db176d129a634c7. Aug 13 01:12:49.677913 systemd[1]: Started cri-containerd-d8ebefa6922e075a29a54fa3cf0ce2c421f3b0a923909c4874a936bf44ed6e71.scope - libcontainer container d8ebefa6922e075a29a54fa3cf0ce2c421f3b0a923909c4874a936bf44ed6e71. Aug 13 01:12:49.829258 containerd[1543]: time="2025-08-13T01:12:49.807669176Z" level=info msg="StartContainer for \"d8ebefa6922e075a29a54fa3cf0ce2c421f3b0a923909c4874a936bf44ed6e71\" returns successfully" Aug 13 01:12:49.835713 containerd[1543]: time="2025-08-13T01:12:49.835634232Z" level=info msg="StartContainer for \"b70970d2f0d0a691418a4195bbe657374f8556aa822a6eb22c170f056522a87d\" returns successfully" Aug 13 01:12:49.867143 containerd[1543]: time="2025-08-13T01:12:49.867061965Z" level=info msg="StartContainer for \"7f3051528cc47aaf300403510c5dac8cb92f90f288d474ee4db176d129a634c7\" returns successfully" Aug 13 01:12:49.896737 kubelet[2393]: W0813 01:12:49.895315 2393 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.234.199.8:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.234.199.8:6443: connect: connection refused Aug 13 01:12:49.896737 kubelet[2393]: E0813 01:12:49.896745 2393 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.234.199.8:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.234.199.8:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:12:50.161374 kubelet[2393]: I0813 01:12:50.160321 2393 kubelet_node_status.go:75] "Attempting to register node" node="172-234-199-8" Aug 13 01:12:50.560629 kubelet[2393]: E0813 01:12:50.560142 2393 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-199-8\" not found" node="172-234-199-8" Aug 13 01:12:50.560629 kubelet[2393]: E0813 01:12:50.560279 2393 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:12:50.563338 kubelet[2393]: E0813 01:12:50.563054 2393 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-199-8\" not found" node="172-234-199-8" Aug 13 01:12:50.563338 kubelet[2393]: E0813 01:12:50.563223 2393 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:12:50.565654 kubelet[2393]: E0813 01:12:50.565638 2393 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-199-8\" not found" node="172-234-199-8" Aug 13 01:12:50.567397 kubelet[2393]: E0813 01:12:50.565904 2393 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:12:51.591657 kubelet[2393]: E0813 01:12:51.591602 2393 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-199-8\" not found" node="172-234-199-8" Aug 13 01:12:51.592300 kubelet[2393]: E0813 01:12:51.591815 2393 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:12:51.592940 kubelet[2393]: E0813 01:12:51.592855 2393 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-199-8\" not found" node="172-234-199-8" Aug 13 01:12:51.593110 kubelet[2393]: E0813 01:12:51.593061 2393 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:12:52.613854 kubelet[2393]: E0813 01:12:52.613789 2393 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-199-8\" not found" node="172-234-199-8" Aug 13 01:12:52.614527 kubelet[2393]: E0813 01:12:52.614091 2393 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:12:53.581410 kubelet[2393]: E0813 01:12:53.581287 2393 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-199-8\" not found" node="172-234-199-8" Aug 13 01:12:53.581899 kubelet[2393]: E0813 01:12:53.581856 2393 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:12:53.867196 kubelet[2393]: E0813 01:12:53.866643 2393 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-234-199-8\" not found" node="172-234-199-8" Aug 13 01:12:54.007518 kubelet[2393]: I0813 01:12:54.007409 2393 kubelet_node_status.go:78] "Successfully registered node" node="172-234-199-8" Aug 13 01:12:54.007518 kubelet[2393]: E0813 01:12:54.007470 2393 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"172-234-199-8\": node \"172-234-199-8\" not found" Aug 13 01:12:54.080443 kubelet[2393]: I0813 01:12:54.072772 2393 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-234-199-8" Aug 13 01:12:54.125554 kubelet[2393]: E0813 01:12:54.115157 2393 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-234-199-8\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-234-199-8" Aug 13 01:12:54.125554 kubelet[2393]: I0813 01:12:54.115241 2393 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-234-199-8" Aug 13 01:12:54.125554 kubelet[2393]: E0813 01:12:54.120590 2393 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-234-199-8\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-234-199-8" Aug 13 01:12:54.125554 kubelet[2393]: I0813 01:12:54.120628 2393 kubelet.go:3194] 
"Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-234-199-8" Aug 13 01:12:54.125554 kubelet[2393]: E0813 01:12:54.125236 2393 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-234-199-8\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-234-199-8" Aug 13 01:12:54.400058 kubelet[2393]: I0813 01:12:54.399704 2393 apiserver.go:52] "Watching apiserver" Aug 13 01:12:54.481266 kubelet[2393]: I0813 01:12:54.481188 2393 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 01:12:55.672963 systemd[1]: Reload requested from client PID 2680 ('systemctl') (unit session-9.scope)... Aug 13 01:12:55.672984 systemd[1]: Reloading... Aug 13 01:12:55.812445 zram_generator::config[2719]: No configuration found. Aug 13 01:12:55.943505 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 01:12:56.084919 systemd[1]: Reloading finished in 411 ms. Aug 13 01:12:56.117551 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:12:56.137906 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 01:12:56.138444 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:12:56.138510 systemd[1]: kubelet.service: Consumed 1.561s CPU time, 131.5M memory peak. Aug 13 01:12:56.144452 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:12:56.341531 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:12:56.355615 (kubelet)[2774]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 01:12:56.420162 kubelet[2774]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 01:12:56.421252 kubelet[2774]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 13 01:12:56.421252 kubelet[2774]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 01:12:56.421252 kubelet[2774]: I0813 01:12:56.420910 2774 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 01:12:56.429505 kubelet[2774]: I0813 01:12:56.429463 2774 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Aug 13 01:12:56.429505 kubelet[2774]: I0813 01:12:56.429492 2774 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 01:12:56.429750 kubelet[2774]: I0813 01:12:56.429733 2774 server.go:954] "Client rotation is on, will bootstrap in background" Aug 13 01:12:56.431670 kubelet[2774]: I0813 01:12:56.431645 2774 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Aug 13 01:12:56.434935 kubelet[2774]: I0813 01:12:56.434498 2774 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 01:12:56.440974 kubelet[2774]: I0813 01:12:56.440944 2774 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Aug 13 01:12:56.454071 kubelet[2774]: I0813 01:12:56.454042 2774 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 13 01:12:56.454510 kubelet[2774]: I0813 01:12:56.454483 2774 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 01:12:56.454717 kubelet[2774]: I0813 01:12:56.454573 2774 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-234-199-8","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 01:12:56.454839 kubelet[2774]: I0813 01:12:56.454825 2774 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 01:12:56.454896 kubelet[2774]: I0813 01:12:56.454887 2774 container_manager_linux.go:304] "Creating device plugin manager" Aug 13 01:12:56.454984 kubelet[2774]: I0813 01:12:56.454975 2774 state_mem.go:36] "Initialized new in-memory state store" Aug 13 01:12:56.455213 kubelet[2774]: I0813 01:12:56.455199 2774 kubelet.go:446] "Attempting to sync node with API server" Aug 13 01:12:56.455290 kubelet[2774]: I0813 01:12:56.455279 2774 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 01:12:56.455418 kubelet[2774]: I0813 01:12:56.455407 2774 kubelet.go:352] "Adding apiserver pod source" Aug 13 01:12:56.456197 kubelet[2774]: I0813 01:12:56.455481 2774 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 01:12:56.456721 kubelet[2774]: I0813 01:12:56.456696 2774 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Aug 13 01:12:56.457190 kubelet[2774]: I0813 01:12:56.457168 2774 kubelet.go:890] "Not starting ClusterTrustBundle informer 
because we are in static kubelet mode" Aug 13 01:12:56.457908 kubelet[2774]: I0813 01:12:56.457887 2774 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 01:12:56.457950 kubelet[2774]: I0813 01:12:56.457925 2774 server.go:1287] "Started kubelet" Aug 13 01:12:56.466023 kubelet[2774]: I0813 01:12:56.465984 2774 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 01:12:56.466648 kubelet[2774]: I0813 01:12:56.466634 2774 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 01:12:56.466752 kubelet[2774]: I0813 01:12:56.466736 2774 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 01:12:56.467954 kubelet[2774]: I0813 01:12:56.467936 2774 server.go:479] "Adding debug handlers to kubelet server" Aug 13 01:12:56.468760 kubelet[2774]: I0813 01:12:56.468736 2774 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 01:12:56.471806 kubelet[2774]: E0813 01:12:56.471791 2774 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 01:12:56.472888 kubelet[2774]: I0813 01:12:56.472874 2774 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 01:12:56.481150 kubelet[2774]: I0813 01:12:56.481111 2774 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 01:12:56.481435 kubelet[2774]: E0813 01:12:56.481412 2774 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-234-199-8\" not found" Aug 13 01:12:56.481668 kubelet[2774]: I0813 01:12:56.481651 2774 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 01:12:56.481819 kubelet[2774]: I0813 01:12:56.481803 2774 reconciler.go:26] "Reconciler: start to sync state" Aug 13 01:12:56.485448 kubelet[2774]: I0813 01:12:56.485431 2774 factory.go:221] Registration of the systemd container factory successfully Aug 13 01:12:56.485656 kubelet[2774]: I0813 01:12:56.485638 2774 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 01:12:56.488644 kubelet[2774]: I0813 01:12:56.488629 2774 factory.go:221] Registration of the containerd container factory successfully Aug 13 01:12:56.501102 kubelet[2774]: I0813 01:12:56.501047 2774 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 01:12:56.503035 kubelet[2774]: I0813 01:12:56.502697 2774 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 13 01:12:56.503035 kubelet[2774]: I0813 01:12:56.502728 2774 status_manager.go:227] "Starting to sync pod status with apiserver" Aug 13 01:12:56.503035 kubelet[2774]: I0813 01:12:56.502753 2774 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
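The nodeConfig dump above embeds the kubelet's hard eviction thresholds as JSON (memory.available < 100Mi, nodefs.available < 10%, and so on). A worked example that decodes a trimmed copy of that fragment; the struct types here are ad-hoc for the sketch, not the kubelet's own:

package main

import (
	"encoding/json"
	"fmt"
)

type threshold struct {
	Signal   string
	Operator string
	Value    struct {
		Quantity   string
		Percentage float64
	}
}

func main() {
	// Values copied from the HardEvictionThresholds portion of the nodeConfig record above.
	const fragment = `[
	 {"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0}},
	 {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1}},
	 {"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05}},
	 {"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15}},
	 {"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05}}
	]`

	var ts []threshold
	if err := json.Unmarshal([]byte(fragment), &ts); err != nil {
		panic(err)
	}
	for _, t := range ts {
		if t.Value.Quantity != "" {
			fmt.Printf("%s %s %s\n", t.Signal, t.Operator, t.Value.Quantity)
		} else {
			fmt.Printf("%s %s %.0f%%\n", t.Signal, t.Operator, t.Value.Percentage*100)
		}
	}
}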
Aug 13 01:12:56.503035 kubelet[2774]: I0813 01:12:56.502763 2774 kubelet.go:2382] "Starting kubelet main sync loop" Aug 13 01:12:56.503035 kubelet[2774]: E0813 01:12:56.502829 2774 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 01:12:56.559942 kubelet[2774]: I0813 01:12:56.557842 2774 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 01:12:56.559942 kubelet[2774]: I0813 01:12:56.557866 2774 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 01:12:56.559942 kubelet[2774]: I0813 01:12:56.557937 2774 state_mem.go:36] "Initialized new in-memory state store" Aug 13 01:12:56.559942 kubelet[2774]: I0813 01:12:56.558232 2774 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 01:12:56.559942 kubelet[2774]: I0813 01:12:56.558242 2774 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 01:12:56.559942 kubelet[2774]: I0813 01:12:56.558262 2774 policy_none.go:49] "None policy: Start" Aug 13 01:12:56.559942 kubelet[2774]: I0813 01:12:56.558279 2774 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 01:12:56.559942 kubelet[2774]: I0813 01:12:56.558288 2774 state_mem.go:35] "Initializing new in-memory state store" Aug 13 01:12:56.559942 kubelet[2774]: I0813 01:12:56.558399 2774 state_mem.go:75] "Updated machine memory state" Aug 13 01:12:56.566004 kubelet[2774]: I0813 01:12:56.565567 2774 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 01:12:56.566004 kubelet[2774]: I0813 01:12:56.565737 2774 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 01:12:56.566004 kubelet[2774]: I0813 01:12:56.565748 2774 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 01:12:56.567185 kubelet[2774]: I0813 01:12:56.567172 2774 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 01:12:56.571079 kubelet[2774]: E0813 01:12:56.570543 2774 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Aug 13 01:12:56.604183 kubelet[2774]: I0813 01:12:56.604058 2774 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-234-199-8" Aug 13 01:12:56.605279 kubelet[2774]: I0813 01:12:56.604788 2774 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-234-199-8" Aug 13 01:12:56.605554 kubelet[2774]: I0813 01:12:56.605540 2774 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-234-199-8" Aug 13 01:12:56.670807 kubelet[2774]: I0813 01:12:56.670742 2774 kubelet_node_status.go:75] "Attempting to register node" node="172-234-199-8" Aug 13 01:12:56.682035 kubelet[2774]: I0813 01:12:56.682005 2774 kubelet_node_status.go:124] "Node was previously registered" node="172-234-199-8" Aug 13 01:12:56.682222 kubelet[2774]: I0813 01:12:56.682132 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d20453b778ac7f92cd11ee60c0ce748c-ca-certs\") pod \"kube-controller-manager-172-234-199-8\" (UID: \"d20453b778ac7f92cd11ee60c0ce748c\") " pod="kube-system/kube-controller-manager-172-234-199-8" Aug 13 01:12:56.682222 kubelet[2774]: I0813 01:12:56.682157 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d20453b778ac7f92cd11ee60c0ce748c-kubeconfig\") pod \"kube-controller-manager-172-234-199-8\" (UID: \"d20453b778ac7f92cd11ee60c0ce748c\") " pod="kube-system/kube-controller-manager-172-234-199-8" Aug 13 01:12:56.682222 kubelet[2774]: I0813 01:12:56.682174 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2aaf346e9711f28d5457dd4106439e3a-kubeconfig\") pod \"kube-scheduler-172-234-199-8\" (UID: \"2aaf346e9711f28d5457dd4106439e3a\") " pod="kube-system/kube-scheduler-172-234-199-8" Aug 13 01:12:56.682222 kubelet[2774]: I0813 01:12:56.682195 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d88e1f2f1a2594d7a178f97575c600e2-ca-certs\") pod \"kube-apiserver-172-234-199-8\" (UID: \"d88e1f2f1a2594d7a178f97575c600e2\") " pod="kube-system/kube-apiserver-172-234-199-8" Aug 13 01:12:56.682912 kubelet[2774]: I0813 01:12:56.682217 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d20453b778ac7f92cd11ee60c0ce748c-flexvolume-dir\") pod \"kube-controller-manager-172-234-199-8\" (UID: \"d20453b778ac7f92cd11ee60c0ce748c\") " pod="kube-system/kube-controller-manager-172-234-199-8" Aug 13 01:12:56.682912 kubelet[2774]: I0813 01:12:56.682497 2774 kubelet_node_status.go:78] "Successfully registered node" node="172-234-199-8" Aug 13 01:12:56.682912 kubelet[2774]: I0813 01:12:56.682497 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d20453b778ac7f92cd11ee60c0ce748c-k8s-certs\") pod \"kube-controller-manager-172-234-199-8\" (UID: \"d20453b778ac7f92cd11ee60c0ce748c\") " pod="kube-system/kube-controller-manager-172-234-199-8" Aug 13 01:12:56.682912 kubelet[2774]: I0813 01:12:56.682516 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d20453b778ac7f92cd11ee60c0ce748c-usr-share-ca-certificates\") pod \"kube-controller-manager-172-234-199-8\" (UID: \"d20453b778ac7f92cd11ee60c0ce748c\") " pod="kube-system/kube-controller-manager-172-234-199-8" Aug 13 01:12:56.682912 kubelet[2774]: I0813 01:12:56.682529 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d88e1f2f1a2594d7a178f97575c600e2-k8s-certs\") pod \"kube-apiserver-172-234-199-8\" (UID: \"d88e1f2f1a2594d7a178f97575c600e2\") " pod="kube-system/kube-apiserver-172-234-199-8" Aug 13 01:12:56.682912 kubelet[2774]: I0813 01:12:56.682544 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d88e1f2f1a2594d7a178f97575c600e2-usr-share-ca-certificates\") pod \"kube-apiserver-172-234-199-8\" (UID: \"d88e1f2f1a2594d7a178f97575c600e2\") " pod="kube-system/kube-apiserver-172-234-199-8" Aug 13 01:12:56.913302 kubelet[2774]: E0813 01:12:56.912978 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:12:56.913700 kubelet[2774]: E0813 01:12:56.913647 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:12:56.914406 kubelet[2774]: E0813 01:12:56.913848 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:12:57.475994 kubelet[2774]: I0813 01:12:57.462102 2774 apiserver.go:52] "Watching apiserver" Aug 13 01:12:57.482597 kubelet[2774]: I0813 01:12:57.482469 2774 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 01:12:57.531378 kubelet[2774]: E0813 01:12:57.530539 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:12:57.532091 kubelet[2774]: I0813 01:12:57.531981 2774 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-234-199-8" Aug 13 01:12:57.533756 kubelet[2774]: I0813 01:12:57.533740 2774 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-234-199-8" Aug 13 01:12:57.548014 kubelet[2774]: E0813 01:12:57.547985 2774 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-234-199-8\" already exists" pod="kube-system/kube-apiserver-172-234-199-8" Aug 13 01:12:57.549428 kubelet[2774]: E0813 01:12:57.549053 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:12:57.550175 kubelet[2774]: E0813 01:12:57.550113 2774 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-234-199-8\" already exists" pod="kube-system/kube-scheduler-172-234-199-8" Aug 13 01:12:57.551692 kubelet[2774]: E0813 01:12:57.551678 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:12:57.600003 kubelet[2774]: I0813 01:12:57.599917 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-234-199-8" podStartSLOduration=1.599872907 podStartE2EDuration="1.599872907s" podCreationTimestamp="2025-08-13 01:12:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:12:57.591881911 +0000 UTC m=+1.229557340" watchObservedRunningTime="2025-08-13 01:12:57.599872907 +0000 UTC m=+1.237548366" Aug 13 01:12:57.600593 kubelet[2774]: I0813 01:12:57.600512 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-234-199-8" podStartSLOduration=1.600501608 podStartE2EDuration="1.600501608s" podCreationTimestamp="2025-08-13 01:12:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:12:57.599443596 +0000 UTC m=+1.237119025" watchObservedRunningTime="2025-08-13 01:12:57.600501608 +0000 UTC m=+1.238177037" Aug 13 01:12:58.532569 kubelet[2774]: E0813 01:12:58.532263 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:12:58.532569 kubelet[2774]: E0813 01:12:58.532268 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:12:59.246382 kubelet[2774]: E0813 01:12:59.244755 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:12:59.260306 kubelet[2774]: I0813 01:12:59.260213 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-234-199-8" podStartSLOduration=3.260190558 podStartE2EDuration="3.260190558s" podCreationTimestamp="2025-08-13 01:12:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:12:57.609492096 +0000 UTC m=+1.247167525" watchObservedRunningTime="2025-08-13 01:12:59.260190558 +0000 UTC m=+2.897865987" Aug 13 01:12:59.534934 kubelet[2774]: E0813 01:12:59.534021 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:12:59.535679 kubelet[2774]: E0813 01:12:59.535561 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:13:00.536012 kubelet[2774]: E0813 01:13:00.535936 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:13:00.795003 kubelet[2774]: I0813 01:13:00.794469 2774 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 01:13:00.796033 containerd[1543]: 
time="2025-08-13T01:13:00.795917775Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 13 01:13:00.796930 kubelet[2774]: I0813 01:13:00.796578 2774 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 01:13:01.559052 systemd[1]: Created slice kubepods-besteffort-poddfb6b09b_3005_43ab_a2ab_7f9b9decd12f.slice - libcontainer container kubepods-besteffort-poddfb6b09b_3005_43ab_a2ab_7f9b9decd12f.slice. Aug 13 01:13:01.734028 kubelet[2774]: I0813 01:13:01.733801 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/dfb6b09b-3005-43ab-a2ab-7f9b9decd12f-kube-proxy\") pod \"kube-proxy-8p8zx\" (UID: \"dfb6b09b-3005-43ab-a2ab-7f9b9decd12f\") " pod="kube-system/kube-proxy-8p8zx" Aug 13 01:13:01.734028 kubelet[2774]: I0813 01:13:01.733837 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dfb6b09b-3005-43ab-a2ab-7f9b9decd12f-lib-modules\") pod \"kube-proxy-8p8zx\" (UID: \"dfb6b09b-3005-43ab-a2ab-7f9b9decd12f\") " pod="kube-system/kube-proxy-8p8zx" Aug 13 01:13:01.734028 kubelet[2774]: I0813 01:13:01.733861 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dfb6b09b-3005-43ab-a2ab-7f9b9decd12f-xtables-lock\") pod \"kube-proxy-8p8zx\" (UID: \"dfb6b09b-3005-43ab-a2ab-7f9b9decd12f\") " pod="kube-system/kube-proxy-8p8zx" Aug 13 01:13:01.734028 kubelet[2774]: I0813 01:13:01.733878 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pph2n\" (UniqueName: \"kubernetes.io/projected/dfb6b09b-3005-43ab-a2ab-7f9b9decd12f-kube-api-access-pph2n\") pod \"kube-proxy-8p8zx\" (UID: \"dfb6b09b-3005-43ab-a2ab-7f9b9decd12f\") " pod="kube-system/kube-proxy-8p8zx" Aug 13 01:13:01.866664 kubelet[2774]: E0813 01:13:01.866518 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:13:01.867936 containerd[1543]: time="2025-08-13T01:13:01.867869136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8p8zx,Uid:dfb6b09b-3005-43ab-a2ab-7f9b9decd12f,Namespace:kube-system,Attempt:0,}" Aug 13 01:13:01.917634 systemd[1]: Created slice kubepods-besteffort-podfe383991_29fe_448b_9865_0e2a6fc84d2e.slice - libcontainer container kubepods-besteffort-podfe383991_29fe_448b_9865_0e2a6fc84d2e.slice. 
Aug 13 01:13:01.937316 kubelet[2774]: I0813 01:13:01.937171 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fe383991-29fe-448b-9865-0e2a6fc84d2e-var-lib-calico\") pod \"tigera-operator-747864d56d-2rkjv\" (UID: \"fe383991-29fe-448b-9865-0e2a6fc84d2e\") " pod="tigera-operator/tigera-operator-747864d56d-2rkjv" Aug 13 01:13:01.937316 kubelet[2774]: I0813 01:13:01.937243 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lswll\" (UniqueName: \"kubernetes.io/projected/fe383991-29fe-448b-9865-0e2a6fc84d2e-kube-api-access-lswll\") pod \"tigera-operator-747864d56d-2rkjv\" (UID: \"fe383991-29fe-448b-9865-0e2a6fc84d2e\") " pod="tigera-operator/tigera-operator-747864d56d-2rkjv" Aug 13 01:13:01.961039 containerd[1543]: time="2025-08-13T01:13:01.960390999Z" level=info msg="connecting to shim 98927a38256f434f890a43c478c7a43ee72a46c6d8fb4bdcf50f2fa9fd49bd4e" address="unix:///run/containerd/s/30bd1a38ca5688b63f63578bc5173909f7a813ec01d3f9381d3185308d602a80" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:13:02.040696 systemd[1]: Started cri-containerd-98927a38256f434f890a43c478c7a43ee72a46c6d8fb4bdcf50f2fa9fd49bd4e.scope - libcontainer container 98927a38256f434f890a43c478c7a43ee72a46c6d8fb4bdcf50f2fa9fd49bd4e. Aug 13 01:13:02.127626 containerd[1543]: time="2025-08-13T01:13:02.127320729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8p8zx,Uid:dfb6b09b-3005-43ab-a2ab-7f9b9decd12f,Namespace:kube-system,Attempt:0,} returns sandbox id \"98927a38256f434f890a43c478c7a43ee72a46c6d8fb4bdcf50f2fa9fd49bd4e\"" Aug 13 01:13:02.128282 kubelet[2774]: E0813 01:13:02.128251 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:13:02.133110 containerd[1543]: time="2025-08-13T01:13:02.133046418Z" level=info msg="CreateContainer within sandbox \"98927a38256f434f890a43c478c7a43ee72a46c6d8fb4bdcf50f2fa9fd49bd4e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 01:13:02.151157 containerd[1543]: time="2025-08-13T01:13:02.151111643Z" level=info msg="Container f733e7f684c6a0d24905f1c7a0e36fb166521899490d2fcb7bbb63f35d5431e3: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:13:02.159068 containerd[1543]: time="2025-08-13T01:13:02.159013375Z" level=info msg="CreateContainer within sandbox \"98927a38256f434f890a43c478c7a43ee72a46c6d8fb4bdcf50f2fa9fd49bd4e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f733e7f684c6a0d24905f1c7a0e36fb166521899490d2fcb7bbb63f35d5431e3\"" Aug 13 01:13:02.161365 containerd[1543]: time="2025-08-13T01:13:02.160773583Z" level=info msg="StartContainer for \"f733e7f684c6a0d24905f1c7a0e36fb166521899490d2fcb7bbb63f35d5431e3\"" Aug 13 01:13:02.162431 containerd[1543]: time="2025-08-13T01:13:02.162410640Z" level=info msg="connecting to shim f733e7f684c6a0d24905f1c7a0e36fb166521899490d2fcb7bbb63f35d5431e3" address="unix:///run/containerd/s/30bd1a38ca5688b63f63578bc5173909f7a813ec01d3f9381d3185308d602a80" protocol=ttrpc version=3 Aug 13 01:13:02.201598 systemd[1]: Started cri-containerd-f733e7f684c6a0d24905f1c7a0e36fb166521899490d2fcb7bbb63f35d5431e3.scope - libcontainer container f733e7f684c6a0d24905f1c7a0e36fb166521899490d2fcb7bbb63f35d5431e3. 
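The recurring "Nameserver limits exceeded" errors above come from the kubelet's DNS handling: the node's resolv.conf lists more nameservers than the kubelet will pass through, so the extras are dropped and only the applied line (172.232.0.15 172.232.0.18 172.232.0.17) is kept. A small standalone check in the same spirit; the /etc/resolv.conf path and the limit of three are assumptions inferred from the log, not read from kubelet configuration:

```go
// Sketch: reproduce a kubelet-style "nameserver limits exceeded" check by
// reading resolv.conf and keeping only the first three nameservers.
// File path and limit are assumptions for illustration.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		log.Fatalf("open resolv.conf: %v", err)
	}
	defer f.Close()

	var nameservers []string
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		fields := strings.Fields(scanner.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			nameservers = append(nameservers, fields[1])
		}
	}
	if err := scanner.Err(); err != nil {
		log.Fatalf("read resolv.conf: %v", err)
	}

	if len(nameservers) > maxNameservers {
		applied := nameservers[:maxNameservers]
		fmt.Printf("Nameserver limits exceeded; applying only: %s\n",
			strings.Join(applied, " "))
		return
	}
	fmt.Printf("nameservers within limit: %s\n", strings.Join(nameservers, " "))
}
```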
Aug 13 01:13:02.224276 containerd[1543]: time="2025-08-13T01:13:02.224203935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-2rkjv,Uid:fe383991-29fe-448b-9865-0e2a6fc84d2e,Namespace:tigera-operator,Attempt:0,}" Aug 13 01:13:02.275587 containerd[1543]: time="2025-08-13T01:13:02.275527264Z" level=info msg="connecting to shim 402bb977e58eb67d453f2a6cd3a6ce5f3883cb4f4dfc5d75ff8c0e3017ad6d77" address="unix:///run/containerd/s/3e58bc29b33b5667a59122e74ad33b3964d3118493530bbaddcad3d10e6cb380" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:13:02.312887 containerd[1543]: time="2025-08-13T01:13:02.312842318Z" level=info msg="StartContainer for \"f733e7f684c6a0d24905f1c7a0e36fb166521899490d2fcb7bbb63f35d5431e3\" returns successfully" Aug 13 01:13:02.332580 systemd[1]: Started cri-containerd-402bb977e58eb67d453f2a6cd3a6ce5f3883cb4f4dfc5d75ff8c0e3017ad6d77.scope - libcontainer container 402bb977e58eb67d453f2a6cd3a6ce5f3883cb4f4dfc5d75ff8c0e3017ad6d77. Aug 13 01:13:02.444730 containerd[1543]: time="2025-08-13T01:13:02.444674793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-2rkjv,Uid:fe383991-29fe-448b-9865-0e2a6fc84d2e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"402bb977e58eb67d453f2a6cd3a6ce5f3883cb4f4dfc5d75ff8c0e3017ad6d77\"" Aug 13 01:13:02.448657 containerd[1543]: time="2025-08-13T01:13:02.448586434Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Aug 13 01:13:02.543647 kubelet[2774]: E0813 01:13:02.543240 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:13:02.859095 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2165338642.mount: Deactivated successfully. Aug 13 01:13:03.627309 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2177537503.mount: Deactivated successfully. 
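The PullImage entry above kicks off the fetch of quay.io/tigera/operator:v1.38.3 through containerd's CRI plugin in the k8s.io namespace. The same pull can be driven directly with containerd's Go client; a minimal sketch, assuming the default socket path and the github.com/containerd/containerd client API (this standalone program is illustrative and not part of the boot sequence shown here):

```go
// Sketch: pull the operator image straight through containerd's Go client,
// in the same "k8s.io" namespace the kubelet's CRI traffic uses.
// Socket path and client API choice are assumptions; illustrative only.
package main

import (
	"context"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatalf("connect to containerd: %v", err)
	}
	defer client.Close()

	// CRI-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	img, err := client.Pull(ctx, "quay.io/tigera/operator:v1.38.3", containerd.WithPullUnpack)
	if err != nil {
		log.Fatalf("pull: %v", err)
	}
	log.Printf("pulled %s (target digest %s)", img.Name(), img.Target().Digest)
}
```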
Aug 13 01:13:04.731323 containerd[1543]: time="2025-08-13T01:13:04.730597250Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:13:04.731323 containerd[1543]: time="2025-08-13T01:13:04.731235486Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Aug 13 01:13:04.732116 containerd[1543]: time="2025-08-13T01:13:04.732091024Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:13:04.734048 containerd[1543]: time="2025-08-13T01:13:04.734025922Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:13:04.735227 containerd[1543]: time="2025-08-13T01:13:04.735202563Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.286556409s" Aug 13 01:13:04.735311 containerd[1543]: time="2025-08-13T01:13:04.735296603Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Aug 13 01:13:04.740865 containerd[1543]: time="2025-08-13T01:13:04.740837605Z" level=info msg="CreateContainer within sandbox \"402bb977e58eb67d453f2a6cd3a6ce5f3883cb4f4dfc5d75ff8c0e3017ad6d77\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Aug 13 01:13:04.749032 containerd[1543]: time="2025-08-13T01:13:04.748948761Z" level=info msg="Container 8c2d2eaaa23c0c279cabbe63af173eee552624917677f2d8263c27ba501dff36: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:13:04.754638 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount281502207.mount: Deactivated successfully. 
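The "stop pulling image" and "Pulled image" entries above report 25056543 bytes read for quay.io/tigera/operator:v1.38.3 over 2.286556409s, an effective transfer rate of roughly 11 MB/s. A quick sanity check of that figure, using only the numbers present in the log:

```go
// Sketch: compute the effective pull rate from the byte count and duration
// reported in the containerd entries above.
package main

import (
	"fmt"
	"time"
)

func main() {
	const bytesRead = 25056543 // "active requests=0, bytes read=25056543"
	dur, err := time.ParseDuration("2.286556409s") // duration reported by containerd
	if err != nil {
		panic(err)
	}

	rate := float64(bytesRead) / dur.Seconds() // bytes per second
	fmt.Printf("pulled %d bytes in %s: %.2f MB/s\n", bytesRead, dur, rate/1e6)
}
```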
Aug 13 01:13:04.757579 containerd[1543]: time="2025-08-13T01:13:04.757541470Z" level=info msg="CreateContainer within sandbox \"402bb977e58eb67d453f2a6cd3a6ce5f3883cb4f4dfc5d75ff8c0e3017ad6d77\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8c2d2eaaa23c0c279cabbe63af173eee552624917677f2d8263c27ba501dff36\"" Aug 13 01:13:04.758838 containerd[1543]: time="2025-08-13T01:13:04.758773721Z" level=info msg="StartContainer for \"8c2d2eaaa23c0c279cabbe63af173eee552624917677f2d8263c27ba501dff36\"" Aug 13 01:13:04.760511 containerd[1543]: time="2025-08-13T01:13:04.760464088Z" level=info msg="connecting to shim 8c2d2eaaa23c0c279cabbe63af173eee552624917677f2d8263c27ba501dff36" address="unix:///run/containerd/s/3e58bc29b33b5667a59122e74ad33b3964d3118493530bbaddcad3d10e6cb380" protocol=ttrpc version=3 Aug 13 01:13:04.785048 kubelet[2774]: E0813 01:13:04.784625 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:13:04.809144 kubelet[2774]: I0813 01:13:04.807716 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8p8zx" podStartSLOduration=3.807663086 podStartE2EDuration="3.807663086s" podCreationTimestamp="2025-08-13 01:13:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:13:02.55513619 +0000 UTC m=+6.192811619" watchObservedRunningTime="2025-08-13 01:13:04.807663086 +0000 UTC m=+8.445338515" Aug 13 01:13:04.836623 systemd[1]: Started cri-containerd-8c2d2eaaa23c0c279cabbe63af173eee552624917677f2d8263c27ba501dff36.scope - libcontainer container 8c2d2eaaa23c0c279cabbe63af173eee552624917677f2d8263c27ba501dff36. Aug 13 01:13:04.959060 containerd[1543]: time="2025-08-13T01:13:04.958936651Z" level=info msg="StartContainer for \"8c2d2eaaa23c0c279cabbe63af173eee552624917677f2d8263c27ba501dff36\" returns successfully" Aug 13 01:13:05.553206 kubelet[2774]: E0813 01:13:05.553118 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:13:05.579642 kubelet[2774]: I0813 01:13:05.579520 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-2rkjv" podStartSLOduration=2.290883459 podStartE2EDuration="4.579498437s" podCreationTimestamp="2025-08-13 01:13:01 +0000 UTC" firstStartedPulling="2025-08-13 01:13:02.447602164 +0000 UTC m=+6.085277593" lastFinishedPulling="2025-08-13 01:13:04.736217142 +0000 UTC m=+8.373892571" observedRunningTime="2025-08-13 01:13:05.564304174 +0000 UTC m=+9.201979613" watchObservedRunningTime="2025-08-13 01:13:05.579498437 +0000 UTC m=+9.217173866" Aug 13 01:13:12.772731 sudo[1836]: pam_unix(sudo:session): session closed for user root Aug 13 01:13:12.831925 sshd[1835]: Connection closed by 147.75.109.163 port 56018 Aug 13 01:13:12.834891 sshd-session[1833]: pam_unix(sshd:session): session closed for user core Aug 13 01:13:12.843221 systemd[1]: sshd@8-172.234.199.8:22-147.75.109.163:56018.service: Deactivated successfully. Aug 13 01:13:12.848567 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 01:13:12.849805 systemd[1]: session-9.scope: Consumed 6.005s CPU time, 229.4M memory peak. Aug 13 01:13:12.852906 systemd-logind[1515]: Session 9 logged out. 
Waiting for processes to exit. Aug 13 01:13:12.856921 systemd-logind[1515]: Removed session 9. Aug 13 01:13:17.522970 systemd[1]: Created slice kubepods-besteffort-podbab297c3_df56_4274_9888_aff645b8c921.slice - libcontainer container kubepods-besteffort-podbab297c3_df56_4274_9888_aff645b8c921.slice. Aug 13 01:13:17.677896 kubelet[2774]: I0813 01:13:17.677821 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jq58h\" (UniqueName: \"kubernetes.io/projected/bab297c3-df56-4274-9888-aff645b8c921-kube-api-access-jq58h\") pod \"calico-typha-bf7d6589c-n48dd\" (UID: \"bab297c3-df56-4274-9888-aff645b8c921\") " pod="calico-system/calico-typha-bf7d6589c-n48dd" Aug 13 01:13:17.677896 kubelet[2774]: I0813 01:13:17.677963 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bab297c3-df56-4274-9888-aff645b8c921-tigera-ca-bundle\") pod \"calico-typha-bf7d6589c-n48dd\" (UID: \"bab297c3-df56-4274-9888-aff645b8c921\") " pod="calico-system/calico-typha-bf7d6589c-n48dd" Aug 13 01:13:17.677896 kubelet[2774]: I0813 01:13:17.677992 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/bab297c3-df56-4274-9888-aff645b8c921-typha-certs\") pod \"calico-typha-bf7d6589c-n48dd\" (UID: \"bab297c3-df56-4274-9888-aff645b8c921\") " pod="calico-system/calico-typha-bf7d6589c-n48dd" Aug 13 01:13:17.779215 systemd[1]: Created slice kubepods-besteffort-pod0909eaba_89ba_4b02_b2f3_a17e3b6e2afc.slice - libcontainer container kubepods-besteffort-pod0909eaba_89ba_4b02_b2f3_a17e3b6e2afc.slice. Aug 13 01:13:17.828090 kubelet[2774]: E0813 01:13:17.828040 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:13:17.828949 containerd[1543]: time="2025-08-13T01:13:17.828642621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-bf7d6589c-n48dd,Uid:bab297c3-df56-4274-9888-aff645b8c921,Namespace:calico-system,Attempt:0,}" Aug 13 01:13:17.880384 kubelet[2774]: I0813 01:13:17.880143 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0909eaba-89ba-4b02-b2f3-a17e3b6e2afc-cni-net-dir\") pod \"calico-node-9bc9j\" (UID: \"0909eaba-89ba-4b02-b2f3-a17e3b6e2afc\") " pod="calico-system/calico-node-9bc9j" Aug 13 01:13:17.880384 kubelet[2774]: I0813 01:13:17.880191 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0909eaba-89ba-4b02-b2f3-a17e3b6e2afc-var-lib-calico\") pod \"calico-node-9bc9j\" (UID: \"0909eaba-89ba-4b02-b2f3-a17e3b6e2afc\") " pod="calico-system/calico-node-9bc9j" Aug 13 01:13:17.880384 kubelet[2774]: I0813 01:13:17.880265 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0909eaba-89ba-4b02-b2f3-a17e3b6e2afc-tigera-ca-bundle\") pod \"calico-node-9bc9j\" (UID: \"0909eaba-89ba-4b02-b2f3-a17e3b6e2afc\") " pod="calico-system/calico-node-9bc9j" Aug 13 01:13:17.880384 kubelet[2774]: I0813 01:13:17.880326 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0909eaba-89ba-4b02-b2f3-a17e3b6e2afc-xtables-lock\") pod \"calico-node-9bc9j\" (UID: \"0909eaba-89ba-4b02-b2f3-a17e3b6e2afc\") " pod="calico-system/calico-node-9bc9j" Aug 13 01:13:17.880989 kubelet[2774]: I0813 01:13:17.880346 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2tsn\" (UniqueName: \"kubernetes.io/projected/0909eaba-89ba-4b02-b2f3-a17e3b6e2afc-kube-api-access-b2tsn\") pod \"calico-node-9bc9j\" (UID: \"0909eaba-89ba-4b02-b2f3-a17e3b6e2afc\") " pod="calico-system/calico-node-9bc9j" Aug 13 01:13:17.880989 kubelet[2774]: I0813 01:13:17.880709 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0909eaba-89ba-4b02-b2f3-a17e3b6e2afc-flexvol-driver-host\") pod \"calico-node-9bc9j\" (UID: \"0909eaba-89ba-4b02-b2f3-a17e3b6e2afc\") " pod="calico-system/calico-node-9bc9j" Aug 13 01:13:17.880989 kubelet[2774]: I0813 01:13:17.880774 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0909eaba-89ba-4b02-b2f3-a17e3b6e2afc-var-run-calico\") pod \"calico-node-9bc9j\" (UID: \"0909eaba-89ba-4b02-b2f3-a17e3b6e2afc\") " pod="calico-system/calico-node-9bc9j" Aug 13 01:13:17.880989 kubelet[2774]: I0813 01:13:17.880793 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0909eaba-89ba-4b02-b2f3-a17e3b6e2afc-cni-bin-dir\") pod \"calico-node-9bc9j\" (UID: \"0909eaba-89ba-4b02-b2f3-a17e3b6e2afc\") " pod="calico-system/calico-node-9bc9j" Aug 13 01:13:17.880989 kubelet[2774]: I0813 01:13:17.880844 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0909eaba-89ba-4b02-b2f3-a17e3b6e2afc-lib-modules\") pod \"calico-node-9bc9j\" (UID: \"0909eaba-89ba-4b02-b2f3-a17e3b6e2afc\") " pod="calico-system/calico-node-9bc9j" Aug 13 01:13:17.881101 kubelet[2774]: I0813 01:13:17.880933 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0909eaba-89ba-4b02-b2f3-a17e3b6e2afc-cni-log-dir\") pod \"calico-node-9bc9j\" (UID: \"0909eaba-89ba-4b02-b2f3-a17e3b6e2afc\") " pod="calico-system/calico-node-9bc9j" Aug 13 01:13:17.881101 kubelet[2774]: I0813 01:13:17.880963 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0909eaba-89ba-4b02-b2f3-a17e3b6e2afc-policysync\") pod \"calico-node-9bc9j\" (UID: \"0909eaba-89ba-4b02-b2f3-a17e3b6e2afc\") " pod="calico-system/calico-node-9bc9j" Aug 13 01:13:17.881101 kubelet[2774]: I0813 01:13:17.881019 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0909eaba-89ba-4b02-b2f3-a17e3b6e2afc-node-certs\") pod \"calico-node-9bc9j\" (UID: \"0909eaba-89ba-4b02-b2f3-a17e3b6e2afc\") " pod="calico-system/calico-node-9bc9j" Aug 13 01:13:17.903294 containerd[1543]: time="2025-08-13T01:13:17.902056389Z" level=info msg="connecting to shim f2cd3a3308be8bc9effb580536d611b6aa9c6139a425d8e01843b723d772b139" 
address="unix:///run/containerd/s/e02a2d7be9552e29765cbf145826c134622632a4a42223b1bb839390a40953e7" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:13:17.986292 kubelet[2774]: E0813 01:13:17.986052 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:13:17.987177 kubelet[2774]: W0813 01:13:17.986776 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:13:17.987177 kubelet[2774]: E0813 01:13:17.986818 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:13:18.008553 kubelet[2774]: E0813 01:13:18.005015 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:13:18.008553 kubelet[2774]: W0813 01:13:18.005050 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:13:18.008553 kubelet[2774]: E0813 01:13:18.005103 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:13:18.008553 kubelet[2774]: E0813 01:13:18.005510 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:13:18.008553 kubelet[2774]: W0813 01:13:18.005522 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:13:18.008553 kubelet[2774]: E0813 01:13:18.005560 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:13:18.008553 kubelet[2774]: E0813 01:13:18.006003 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:13:18.008553 kubelet[2774]: W0813 01:13:18.006012 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:13:18.008553 kubelet[2774]: E0813 01:13:18.006020 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:13:18.010308 kubelet[2774]: E0813 01:13:18.009334 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:13:18.010308 kubelet[2774]: W0813 01:13:18.009383 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:13:18.010308 kubelet[2774]: E0813 01:13:18.009398 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:13:18.010308 kubelet[2774]: E0813 01:13:18.009854 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:13:18.010308 kubelet[2774]: W0813 01:13:18.009866 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:13:18.010308 kubelet[2774]: E0813 01:13:18.009874 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:13:18.010308 kubelet[2774]: E0813 01:13:18.010064 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:13:18.010308 kubelet[2774]: W0813 01:13:18.010072 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:13:18.010308 kubelet[2774]: E0813 01:13:18.010112 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:13:18.014049 kubelet[2774]: E0813 01:13:18.010397 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:13:18.014049 kubelet[2774]: W0813 01:13:18.010427 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:13:18.014049 kubelet[2774]: E0813 01:13:18.010440 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:13:18.014049 kubelet[2774]: E0813 01:13:18.011578 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:13:18.014049 kubelet[2774]: W0813 01:13:18.011587 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:13:18.014049 kubelet[2774]: E0813 01:13:18.011599 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:13:18.014049 kubelet[2774]: E0813 01:13:18.011761 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:13:18.014049 kubelet[2774]: W0813 01:13:18.011768 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:13:18.014049 kubelet[2774]: E0813 01:13:18.011797 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:13:18.014049 kubelet[2774]: E0813 01:13:18.011939 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:13:18.014261 kubelet[2774]: W0813 01:13:18.011947 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:13:18.014261 kubelet[2774]: E0813 01:13:18.011956 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:13:18.014261 kubelet[2774]: E0813 01:13:18.012158 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:13:18.014261 kubelet[2774]: W0813 01:13:18.012168 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:13:18.014261 kubelet[2774]: E0813 01:13:18.012180 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:13:18.014261 kubelet[2774]: E0813 01:13:18.012700 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:13:18.014261 kubelet[2774]: W0813 01:13:18.012712 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:13:18.014261 kubelet[2774]: E0813 01:13:18.012723 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:13:18.017389 kubelet[2774]: E0813 01:13:18.015006 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:13:18.017389 kubelet[2774]: W0813 01:13:18.015026 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:13:18.017389 kubelet[2774]: E0813 01:13:18.015035 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:13:18.022938 kubelet[2774]: E0813 01:13:18.022787 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:13:18.022938 kubelet[2774]: W0813 01:13:18.022814 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:13:18.022938 kubelet[2774]: E0813 01:13:18.022839 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:13:18.024420 kubelet[2774]: E0813 01:13:18.023422 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:13:18.024420 kubelet[2774]: W0813 01:13:18.023633 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:13:18.024420 kubelet[2774]: E0813 01:13:18.023655 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:13:18.024420 kubelet[2774]: E0813 01:13:18.023863 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:13:18.024420 kubelet[2774]: W0813 01:13:18.023870 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:13:18.024420 kubelet[2774]: E0813 01:13:18.024032 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:13:18.024420 kubelet[2774]: W0813 01:13:18.024040 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:13:18.024420 kubelet[2774]: E0813 01:13:18.024049 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:13:18.024420 kubelet[2774]: E0813 01:13:18.024156 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:13:18.024420 kubelet[2774]: E0813 01:13:18.024225 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:13:18.024970 kubelet[2774]: W0813 01:13:18.024231 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:13:18.024970 kubelet[2774]: E0813 01:13:18.024240 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:13:18.024970 kubelet[2774]: E0813 01:13:18.024458 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:13:18.024970 kubelet[2774]: W0813 01:13:18.024467 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:13:18.024970 kubelet[2774]: E0813 01:13:18.024488 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:13:18.024970 kubelet[2774]: E0813 01:13:18.024856 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:13:18.024970 kubelet[2774]: W0813 01:13:18.024864 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:13:18.024970 kubelet[2774]: E0813 01:13:18.024885 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:13:18.025225 kubelet[2774]: E0813 01:13:18.025092 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:13:18.025225 kubelet[2774]: W0813 01:13:18.025100 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:13:18.025225 kubelet[2774]: E0813 01:13:18.025108 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:13:18.026769 kubelet[2774]: E0813 01:13:18.025300 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:13:18.026769 kubelet[2774]: W0813 01:13:18.025308 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:13:18.026769 kubelet[2774]: E0813 01:13:18.025324 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:13:18.026769 kubelet[2774]: E0813 01:13:18.025505 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:13:18.026769 kubelet[2774]: W0813 01:13:18.025514 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:13:18.026769 kubelet[2774]: E0813 01:13:18.025534 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:13:18.026769 kubelet[2774]: E0813 01:13:18.025785 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:13:18.026769 kubelet[2774]: W0813 01:13:18.025793 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:13:18.026769 kubelet[2774]: E0813 01:13:18.025819 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:13:18.026769 kubelet[2774]: E0813 01:13:18.026332 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:13:18.026986 kubelet[2774]: W0813 01:13:18.026459 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:13:18.026986 kubelet[2774]: E0813 01:13:18.026749 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:13:18.029824 kubelet[2774]: E0813 01:13:18.028743 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:13:18.029824 kubelet[2774]: W0813 01:13:18.028761 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:13:18.029824 kubelet[2774]: E0813 01:13:18.028770 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:13:18.046640 systemd[1]: Started cri-containerd-f2cd3a3308be8bc9effb580536d611b6aa9c6139a425d8e01843b723d772b139.scope - libcontainer container f2cd3a3308be8bc9effb580536d611b6aa9c6139a425d8e01843b723d772b139. Aug 13 01:13:18.060552 kubelet[2774]: E0813 01:13:18.060503 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:13:18.060552 kubelet[2774]: W0813 01:13:18.060532 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:13:18.060552 kubelet[2774]: E0813 01:13:18.060555 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:13:18.090386 kubelet[2774]: E0813 01:13:18.089916 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bc7dg" podUID="e1069218-cdb9-4130-adce-4bdd23361a59" Aug 13 01:13:18.111885 containerd[1543]: time="2025-08-13T01:13:18.111836546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9bc9j,Uid:0909eaba-89ba-4b02-b2f3-a17e3b6e2afc,Namespace:calico-system,Attempt:0,}" Aug 13 01:13:18.124566 kubelet[2774]: E0813 01:13:18.124522 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:13:18.124768 kubelet[2774]: W0813 01:13:18.124685 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:13:18.124768 kubelet[2774]: E0813 01:13:18.124715 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:13:18.126213 kubelet[2774]: E0813 01:13:18.126194 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:13:18.126213 kubelet[2774]: W0813 01:13:18.126211 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:13:18.126213 kubelet[2774]: E0813 01:13:18.126222 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:13:18.126457 kubelet[2774]: E0813 01:13:18.126440 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:13:18.126630 kubelet[2774]: W0813 01:13:18.126529 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:13:18.126630 kubelet[2774]: E0813 01:13:18.126540 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:13:18.127292 kubelet[2774]: E0813 01:13:18.127247 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:13:18.127292 kubelet[2774]: W0813 01:13:18.127263 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:13:18.127292 kubelet[2774]: E0813 01:13:18.127272 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:13:18.128378 kubelet[2774]: E0813 01:13:18.128339 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:13:18.128963 kubelet[2774]: W0813 01:13:18.128439 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:13:18.128963 kubelet[2774]: E0813 01:13:18.128454 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:13:18.150824 kubelet[2774]: E0813 01:13:18.147992 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:13:18.150824 kubelet[2774]: W0813 01:13:18.148016 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:13:18.150824 kubelet[2774]: E0813 01:13:18.148041 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Aug 13 01:13:18.150824 kubelet[2774]: E0813 01:13:18.148303 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:13:18.150824 kubelet[2774]: W0813 01:13:18.148316 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:13:18.150824 kubelet[2774]: E0813 01:13:18.148330 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
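The kubelet errors above come from the FlexVolume dynamic plugin prober: for each directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ on this host, the kubelet execs the driver binary with the argument "init" and decodes its stdout as a JSON status object. Because nodeagent~uds/uds is not present, the exec fails, the captured output is empty, and decoding "" is exactly "unexpected end of JSON input". The following is a minimal Go sketch of that call path, assuming a hypothetical probeDriver helper; it is not the kubelet's actual driver-call.go code.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus mirrors the JSON a FlexVolume driver is expected to print,
// e.g. {"status":"Success","capabilities":{"attach":false}}.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

// probeDriver runs the driver's "init" call, as the prober in the log does.
func probeDriver(driverPath string) (*driverStatus, error) {
	out, execErr := exec.Command(driverPath, "init").CombinedOutput()

	var st driverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		// With a missing binary, out is empty, so this decode error is
		// "unexpected end of JSON input", matching the log entries.
		return nil, fmt.Errorf("failed to unmarshal output %q: %v (exec error: %v)", out, err, execErr)
	}
	return &st, nil
}

func main() {
	// Path taken from the log; the binary is expected to be missing here.
	_, err := probeDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds")
	fmt.Println(err)
}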
Aug 13 01:13:18.198144 containerd[1543]: time="2025-08-13T01:13:18.183613942Z" level=info msg="connecting to shim 76ae3444196225a303323f3547299db8a8e29579071416116c7a062f7244cd3f" address="unix:///run/containerd/s/cf9b7ff19b55ec6486c0acc44bf3240394cd63a1845dc6c2022ea4b685d3e5e0" namespace=k8s.io protocol=ttrpc version=3
Aug 13 01:13:18.198319 kubelet[2774]: I0813 01:13:18.191143 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bqfh\" (UniqueName: \"kubernetes.io/projected/e1069218-cdb9-4130-adce-4bdd23361a59-kube-api-access-8bqfh\") pod \"csi-node-driver-bc7dg\" (UID: \"e1069218-cdb9-4130-adce-4bdd23361a59\") " pod="calico-system/csi-node-driver-bc7dg"
Aug 13 01:13:18.198319 kubelet[2774]: I0813 01:13:18.191480 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e1069218-cdb9-4130-adce-4bdd23361a59-socket-dir\") pod \"csi-node-driver-bc7dg\" (UID: \"e1069218-cdb9-4130-adce-4bdd23361a59\") " pod="calico-system/csi-node-driver-bc7dg"
Aug 13 01:13:18.198621 kubelet[2774]: I0813 01:13:18.192260 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e1069218-cdb9-4130-adce-4bdd23361a59-kubelet-dir\") pod \"csi-node-driver-bc7dg\" (UID: \"e1069218-cdb9-4130-adce-4bdd23361a59\") " pod="calico-system/csi-node-driver-bc7dg"
Aug 13 01:13:18.198847 kubelet[2774]: I0813 01:13:18.194923 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e1069218-cdb9-4130-adce-4bdd23361a59-registration-dir\") pod \"csi-node-driver-bc7dg\" (UID: \"e1069218-cdb9-4130-adce-4bdd23361a59\") " pod="calico-system/csi-node-driver-bc7dg"
Aug 13 01:13:18.199063 kubelet[2774]: I0813 01:13:18.196253 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/e1069218-cdb9-4130-adce-4bdd23361a59-varrun\") pod \"csi-node-driver-bc7dg\" (UID: \"e1069218-cdb9-4130-adce-4bdd23361a59\") " pod="calico-system/csi-node-driver-bc7dg"
Aug 13 01:13:18.334790 systemd[1]: Started cri-containerd-76ae3444196225a303323f3547299db8a8e29579071416116c7a062f7244cd3f.scope - libcontainer container 76ae3444196225a303323f3547299db8a8e29579071416116c7a062f7244cd3f.
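The reconciler entries above show the kubelet verifying the csi-node-driver-bc7dg pod's volumes before mounting them: one projected service-account token (kube-api-access-8bqfh) and four hostPath volumes (varrun, socket-dir, kubelet-dir, registration-dir). The log only names the volumes, not the underlying host paths, so the Go sketch below builds a plausible volume set for a CSI node plugin with assumed paths; the names follow the log, every path is illustrative only.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	dir := corev1.HostPathDirectory
	dirOrCreate := corev1.HostPathDirectoryOrCreate

	// Volume names come from the log; every Path below is an assumption,
	// chosen to resemble a typical CSI node-plugin layout.
	volumes := []corev1.Volume{
		{Name: "varrun", VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{Path: "/var/run", Type: &dir}}},
		{Name: "kubelet-dir", VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{Path: "/var/lib/kubelet", Type: &dir}}},
		{Name: "registration-dir", VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{Path: "/var/lib/kubelet/plugins_registry", Type: &dirOrCreate}}},
		{Name: "socket-dir", VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{Path: "/var/lib/kubelet/plugins/csi.example.io", Type: &dirOrCreate}}},
		// kube-api-access-8bqfh is a projected service-account token volume
		// injected by the kubelet itself; omitted here for brevity.
	}

	for _, v := range volumes {
		fmt.Printf("%s -> %s\n", v.Name, v.HostPath.Path)
	}
}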
Aug 13 01:13:18.403196 containerd[1543]: time="2025-08-13T01:13:18.403119559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-bf7d6589c-n48dd,Uid:bab297c3-df56-4274-9888-aff645b8c921,Namespace:calico-system,Attempt:0,} returns sandbox id \"f2cd3a3308be8bc9effb580536d611b6aa9c6139a425d8e01843b723d772b139\""
Aug 13 01:13:18.406362 kubelet[2774]: E0813 01:13:18.405066 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
Aug 13 01:13:18.407375 containerd[1543]: time="2025-08-13T01:13:18.407333659Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\""
Aug 13 01:13:18.452262 containerd[1543]: time="2025-08-13T01:13:18.452171742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9bc9j,Uid:0909eaba-89ba-4b02-b2f3-a17e3b6e2afc,Namespace:calico-system,Attempt:0,} returns sandbox id \"76ae3444196225a303323f3547299db8a8e29579071416116c7a062f7244cd3f\""
Aug 13 01:13:19.152562 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount576845776.mount: Deactivated successfully.
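The dns.go warning above is the kubelet capping the pod's resolv.conf: like the classic glibc resolver, it applies at most three nameserver entries and drops the rest, which is why the "applied nameserver line" lists exactly three addresses. A simplified Go illustration of that truncation follows; it is not the kubelet's actual dns.go, and the fourth address is a hypothetical extra entry added only to trigger the omission.

package main

import "fmt"

// maxNameservers mirrors the classic resolver limit (glibc MAXNS) that the
// kubelet enforces when it assembles a pod's resolv.conf.
const maxNameservers = 3

// applyNameserverLimit keeps the first maxNameservers entries and reports
// the ones that would be omitted (and warned about, as in the log).
func applyNameserverLimit(nameservers []string) (applied, omitted []string) {
	if len(nameservers) <= maxNameservers {
		return nameservers, nil
	}
	return nameservers[:maxNameservers], nameservers[maxNameservers:]
}

func main() {
	// The first three addresses come from the log; "10.0.0.53" is made up.
	applied, omitted := applyNameserverLimit([]string{
		"172.232.0.15", "172.232.0.18", "172.232.0.17", "10.0.0.53",
	})
	fmt.Println("applied:", applied)
	fmt.Println("omitted:", omitted)
}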
Aug 13 01:13:20.504242 kubelet[2774]: E0813 01:13:20.504156 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bc7dg" podUID="e1069218-cdb9-4130-adce-4bdd23361a59"
Aug 13 01:13:20.645758 containerd[1543]: time="2025-08-13T01:13:20.645683816Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:13:20.646754 containerd[1543]: time="2025-08-13T01:13:20.646613601Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364"
Aug 13 01:13:20.647328 containerd[1543]: time="2025-08-13T01:13:20.647289473Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:13:20.649398 containerd[1543]: time="2025-08-13T01:13:20.649336633Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:13:20.650045 containerd[1543]: time="2025-08-13T01:13:20.650006316Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 2.242554026s"
Aug 13 01:13:20.650126 containerd[1543]: time="2025-08-13T01:13:20.650109807Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\""
Aug 13 01:13:20.653186 containerd[1543]: time="2025-08-13T01:13:20.653144911Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\""
Aug 13 01:13:20.695728 containerd[1543]: time="2025-08-13T01:13:20.695670466Z" level=info msg="CreateContainer within sandbox \"f2cd3a3308be8bc9effb580536d611b6aa9c6139a425d8e01843b723d772b139\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Aug 13 01:13:20.707119 containerd[1543]: time="2025-08-13T01:13:20.705408370Z" level=info msg="Container e1b153c665fb0e47e6b38d54d2e8da19cec5e94b02fa46647134c5f7b16b6c1f: CDI devices from CRI Config.CDIDevices: []"
Aug 13 01:13:20.717139 containerd[1543]: time="2025-08-13T01:13:20.717081115Z" level=info msg="CreateContainer within sandbox \"f2cd3a3308be8bc9effb580536d611b6aa9c6139a425d8e01843b723d772b139\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e1b153c665fb0e47e6b38d54d2e8da19cec5e94b02fa46647134c5f7b16b6c1f\""
Aug 13 01:13:20.719783 containerd[1543]: time="2025-08-13T01:13:20.718788362Z" level=info msg="StartContainer for \"e1b153c665fb0e47e6b38d54d2e8da19cec5e94b02fa46647134c5f7b16b6c1f\""
Aug 13 01:13:20.722143 containerd[1543]: time="2025-08-13T01:13:20.722078587Z" level=info msg="connecting to shim e1b153c665fb0e47e6b38d54d2e8da19cec5e94b02fa46647134c5f7b16b6c1f" address="unix:///run/containerd/s/e02a2d7be9552e29765cbf145826c134622632a4a42223b1bb839390a40953e7" protocol=ttrpc version=3
Aug 13 01:13:20.802881 systemd[1]: Started cri-containerd-e1b153c665fb0e47e6b38d54d2e8da19cec5e94b02fa46647134c5f7b16b6c1f.scope - libcontainer container e1b153c665fb0e47e6b38d54d2e8da19cec5e94b02fa46647134c5f7b16b6c1f.
Aug 13 01:13:20.948171 containerd[1543]: time="2025-08-13T01:13:20.947987684Z" level=info msg="StartContainer for \"e1b153c665fb0e47e6b38d54d2e8da19cec5e94b02fa46647134c5f7b16b6c1f\" returns successfully"
Aug 13 01:13:21.591869 containerd[1543]: time="2025-08-13T01:13:21.591786907Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:13:21.592844 containerd[1543]: time="2025-08-13T01:13:21.592615920Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956"
Aug 13 01:13:21.593677 containerd[1543]: time="2025-08-13T01:13:21.593605094Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:13:21.595402 containerd[1543]: time="2025-08-13T01:13:21.595336012Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:13:21.596612 containerd[1543]: time="2025-08-13T01:13:21.596319497Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 943.032876ms"
Aug 13 01:13:21.596612 containerd[1543]: time="2025-08-13T01:13:21.596394557Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\""
Aug 13 01:13:21.601777 containerd[1543]: time="2025-08-13T01:13:21.601570149Z" level=info msg="CreateContainer within sandbox \"76ae3444196225a303323f3547299db8a8e29579071416116c7a062f7244cd3f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Aug 13 01:13:21.614378 containerd[1543]: time="2025-08-13T01:13:21.612619918Z" level=info msg="Container 3ae559c3c829be98d3754d2057eaa86896d59389af9ff5fa3fc68270258027b9: CDI devices from CRI Config.CDIDevices: []"
Aug 13 01:13:21.623585 containerd[1543]: time="2025-08-13T01:13:21.622811974Z" level=info msg="CreateContainer within sandbox \"76ae3444196225a303323f3547299db8a8e29579071416116c7a062f7244cd3f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"3ae559c3c829be98d3754d2057eaa86896d59389af9ff5fa3fc68270258027b9\""
Aug 13 01:13:21.625580 containerd[1543]: time="2025-08-13T01:13:21.625554806Z" level=info msg="StartContainer for \"3ae559c3c829be98d3754d2057eaa86896d59389af9ff5fa3fc68270258027b9\""
Aug 13 01:13:21.628102 containerd[1543]: time="2025-08-13T01:13:21.627992427Z" level=info msg="connecting to shim 3ae559c3c829be98d3754d2057eaa86896d59389af9ff5fa3fc68270258027b9" address="unix:///run/containerd/s/cf9b7ff19b55ec6486c0acc44bf3240394cd63a1845dc6c2022ea4b685d3e5e0" protocol=ttrpc version=3
Aug 13 01:13:21.630101 kubelet[2774]: E0813 01:13:21.629948 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
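The PullImage / CreateContainer / StartContainer sequence above is containerd's CRI plugin doing the work on the kubelet's behalf, with each container managed through a shim reached over the ttrpc sockets shown in the connecting-to-shim entries. The flexvol-driver container started here comes from the pod2daemon-flexvol image, which is typically what installs the nodeagent~uds binary that the earlier FlexVolume probe errors complain about. The Go sketch below drives a comparable pull-create-start flow directly through containerd's client library; it is illustrative only (the container and snapshot IDs are made up, and the real path in this log goes through CRI rather than this client).

package main

import (
	"context"
	"fmt"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	// Connect to the same containerd instance the kubelet uses.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// The CRI plugin works in the k8s.io namespace, as seen in the log.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Pull, mirroring the "PullImage" / "Pulled image" entries.
	image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/typha:v3.30.2", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// Create a container; the ID and snapshot name here are hypothetical.
	container, err := client.NewContainer(ctx, "calico-typha-demo",
		containerd.WithNewSnapshot("calico-typha-demo-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// Create the task (this is where containerd starts a shim) and run it.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	statusC, err := task.Wait(ctx)
	if err != nil {
		log.Fatal(err)
	}
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}

	// Block until the container exits and report its status.
	status := <-statusC
	fmt.Println("exit code:", status.ExitCode())
}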
exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:13:21.721129 kubelet[2774]: E0813 01:13:21.721083 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:13:21.721129 kubelet[2774]: W0813 01:13:21.721122 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:13:21.721482 kubelet[2774]: E0813 01:13:21.721151 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:13:21.721482 kubelet[2774]: E0813 01:13:21.721422 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:13:21.721482 kubelet[2774]: W0813 01:13:21.721432 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:13:21.721978 kubelet[2774]: E0813 01:13:21.721446 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:13:21.722285 kubelet[2774]: E0813 01:13:21.722261 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:13:21.722285 kubelet[2774]: W0813 01:13:21.722275 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:13:21.722486 kubelet[2774]: E0813 01:13:21.722288 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:13:21.722999 kubelet[2774]: E0813 01:13:21.722783 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:13:21.722999 kubelet[2774]: W0813 01:13:21.722797 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:13:21.722999 kubelet[2774]: E0813 01:13:21.722808 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:13:21.723219 kubelet[2774]: E0813 01:13:21.723098 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:13:21.723219 kubelet[2774]: W0813 01:13:21.723109 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:13:21.723219 kubelet[2774]: E0813 01:13:21.723121 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:13:21.723412 kubelet[2774]: E0813 01:13:21.723316 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:13:21.723412 kubelet[2774]: W0813 01:13:21.723325 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:13:21.723412 kubelet[2774]: E0813 01:13:21.723336 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:13:21.723922 kubelet[2774]: E0813 01:13:21.723554 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:13:21.723922 kubelet[2774]: W0813 01:13:21.723568 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:13:21.723922 kubelet[2774]: E0813 01:13:21.723579 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:13:21.724046 kubelet[2774]: E0813 01:13:21.723975 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:13:21.724046 kubelet[2774]: W0813 01:13:21.723985 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:13:21.724046 kubelet[2774]: E0813 01:13:21.723995 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:13:21.724326 kubelet[2774]: E0813 01:13:21.724204 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:13:21.724326 kubelet[2774]: W0813 01:13:21.724221 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:13:21.724326 kubelet[2774]: E0813 01:13:21.724235 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:13:21.724758 kubelet[2774]: E0813 01:13:21.724457 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:13:21.724758 kubelet[2774]: W0813 01:13:21.724468 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:13:21.724758 kubelet[2774]: E0813 01:13:21.724479 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:13:21.725343 kubelet[2774]: E0813 01:13:21.724897 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:13:21.725343 kubelet[2774]: W0813 01:13:21.724910 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:13:21.725343 kubelet[2774]: E0813 01:13:21.724922 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:13:21.725343 kubelet[2774]: E0813 01:13:21.725161 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:13:21.725343 kubelet[2774]: W0813 01:13:21.725172 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:13:21.725343 kubelet[2774]: E0813 01:13:21.725182 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:13:21.725572 kubelet[2774]: E0813 01:13:21.725426 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:13:21.725572 kubelet[2774]: W0813 01:13:21.725436 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:13:21.725572 kubelet[2774]: E0813 01:13:21.725448 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:13:21.726364 kubelet[2774]: E0813 01:13:21.725913 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:13:21.726364 kubelet[2774]: W0813 01:13:21.725927 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:13:21.726364 kubelet[2774]: E0813 01:13:21.725938 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:13:21.726364 kubelet[2774]: E0813 01:13:21.726131 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:13:21.726364 kubelet[2774]: W0813 01:13:21.726196 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:13:21.726364 kubelet[2774]: E0813 01:13:21.726209 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:13:21.760642 systemd[1]: Started cri-containerd-3ae559c3c829be98d3754d2057eaa86896d59389af9ff5fa3fc68270258027b9.scope - libcontainer container 3ae559c3c829be98d3754d2057eaa86896d59389af9ff5fa3fc68270258027b9. Aug 13 01:13:21.810150 kubelet[2774]: E0813 01:13:21.810015 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:13:21.810150 kubelet[2774]: W0813 01:13:21.810045 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:13:21.810150 kubelet[2774]: E0813 01:13:21.810069 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:13:21.811277 kubelet[2774]: E0813 01:13:21.811203 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:13:21.811328 kubelet[2774]: W0813 01:13:21.811217 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:13:21.811483 kubelet[2774]: E0813 01:13:21.811391 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:13:21.811758 kubelet[2774]: E0813 01:13:21.811726 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:13:21.811758 kubelet[2774]: W0813 01:13:21.811738 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:13:21.811953 kubelet[2774]: E0813 01:13:21.811938 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:13:21.812239 kubelet[2774]: E0813 01:13:21.812225 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:13:21.812327 kubelet[2774]: W0813 01:13:21.812275 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:13:21.812507 kubelet[2774]: E0813 01:13:21.812495 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:13:21.812754 kubelet[2774]: E0813 01:13:21.812731 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:13:21.812754 kubelet[2774]: W0813 01:13:21.812742 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:13:21.812904 kubelet[2774]: E0813 01:13:21.812799 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:13:21.813140 kubelet[2774]: E0813 01:13:21.813120 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:13:21.813140 kubelet[2774]: W0813 01:13:21.813134 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:13:21.813233 kubelet[2774]: E0813 01:13:21.813216 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:13:21.813387 kubelet[2774]: E0813 01:13:21.813343 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:13:21.813387 kubelet[2774]: W0813 01:13:21.813370 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:13:21.813465 kubelet[2774]: E0813 01:13:21.813451 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:13:21.813661 kubelet[2774]: E0813 01:13:21.813649 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:13:21.813661 kubelet[2774]: W0813 01:13:21.813659 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:13:21.813718 kubelet[2774]: E0813 01:13:21.813672 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:13:21.813859 kubelet[2774]: E0813 01:13:21.813846 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:13:21.813893 kubelet[2774]: W0813 01:13:21.813857 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:13:21.813913 kubelet[2774]: E0813 01:13:21.813902 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:13:21.814139 kubelet[2774]: E0813 01:13:21.814126 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:13:21.814139 kubelet[2774]: W0813 01:13:21.814138 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:13:21.814189 kubelet[2774]: E0813 01:13:21.814159 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:13:21.814431 kubelet[2774]: E0813 01:13:21.814417 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:13:21.814431 kubelet[2774]: W0813 01:13:21.814429 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:13:21.814520 kubelet[2774]: E0813 01:13:21.814437 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:13:21.814915 kubelet[2774]: E0813 01:13:21.814891 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:13:21.814946 kubelet[2774]: W0813 01:13:21.814918 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:13:21.814946 kubelet[2774]: E0813 01:13:21.814927 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:13:21.815245 kubelet[2774]: E0813 01:13:21.815226 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:13:21.815245 kubelet[2774]: W0813 01:13:21.815240 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:13:21.815379 kubelet[2774]: E0813 01:13:21.815341 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:13:21.815758 kubelet[2774]: E0813 01:13:21.815686 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:13:21.815758 kubelet[2774]: W0813 01:13:21.815750 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:13:21.815806 kubelet[2774]: E0813 01:13:21.815764 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:13:21.816460 kubelet[2774]: E0813 01:13:21.816449 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:13:21.816460 kubelet[2774]: W0813 01:13:21.816460 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:13:21.816507 kubelet[2774]: E0813 01:13:21.816468 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:13:21.816632 kubelet[2774]: E0813 01:13:21.816620 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:13:21.816632 kubelet[2774]: W0813 01:13:21.816630 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:13:21.816856 kubelet[2774]: E0813 01:13:21.816823 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:13:21.817012 kubelet[2774]: E0813 01:13:21.816999 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:13:21.817012 kubelet[2774]: W0813 01:13:21.817010 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:13:21.817054 kubelet[2774]: E0813 01:13:21.817018 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:13:21.817489 kubelet[2774]: E0813 01:13:21.817469 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:13:21.817489 kubelet[2774]: W0813 01:13:21.817482 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:13:21.817538 kubelet[2774]: E0813 01:13:21.817491 2774 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:13:21.903238 containerd[1543]: time="2025-08-13T01:13:21.902082951Z" level=info msg="StartContainer for \"3ae559c3c829be98d3754d2057eaa86896d59389af9ff5fa3fc68270258027b9\" returns successfully" Aug 13 01:13:21.957457 systemd[1]: cri-containerd-3ae559c3c829be98d3754d2057eaa86896d59389af9ff5fa3fc68270258027b9.scope: Deactivated successfully. 
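The repeated driver-call.go / plugins.go errors above are the kubelet's FlexVolume prober at work: it executes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the single argument init and unmarshals the driver's stdout as JSON. Because the uds executable was never installed, stdout is empty, the unmarshal fails with "unexpected end of JSON input", and the prober skips the nodeagent~uds directory on every re-scan, which is why the same three-entry pattern recurs. For reference, a minimal sketch of the init handshake the kubelet expects, assuming the standard FlexVolume driver contract (an illustration, not the actual nodeagent~uds driver):

```go
// Minimal FlexVolume driver sketch: the kubelet invokes the binary with a
// subcommand ("init" during probing) and parses whatever it prints to stdout
// as JSON. A missing binary or empty output yields exactly the
// "unexpected end of JSON input" errors seen in this log.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the JSON shape the FlexVolume contract expects back.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func reply(s driverStatus) {
	out, _ := json.Marshal(s)
	fmt.Println(string(out))
}

func main() {
	if len(os.Args) < 2 {
		reply(driverStatus{Status: "Failure", Message: "no command given"})
		os.Exit(1)
	}
	switch os.Args[1] {
	case "init":
		// Declare success and state that this driver does not implement attach/detach.
		reply(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
	default:
		// Unhandled calls must still return valid JSON.
		reply(driverStatus{Status: "Not supported"})
	}
}
```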
Aug 13 01:13:21.962079 containerd[1543]: time="2025-08-13T01:13:21.962018547Z" level=info msg="received exit event container_id:\"3ae559c3c829be98d3754d2057eaa86896d59389af9ff5fa3fc68270258027b9\" id:\"3ae559c3c829be98d3754d2057eaa86896d59389af9ff5fa3fc68270258027b9\" pid:3446 exited_at:{seconds:1755047601 nanos:961196733}" Aug 13 01:13:21.962079 containerd[1543]: time="2025-08-13T01:13:21.962046047Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3ae559c3c829be98d3754d2057eaa86896d59389af9ff5fa3fc68270258027b9\" id:\"3ae559c3c829be98d3754d2057eaa86896d59389af9ff5fa3fc68270258027b9\" pid:3446 exited_at:{seconds:1755047601 nanos:961196733}" Aug 13 01:13:22.007776 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3ae559c3c829be98d3754d2057eaa86896d59389af9ff5fa3fc68270258027b9-rootfs.mount: Deactivated successfully. Aug 13 01:13:22.507843 kubelet[2774]: E0813 01:13:22.506628 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bc7dg" podUID="e1069218-cdb9-4130-adce-4bdd23361a59" Aug 13 01:13:22.635269 kubelet[2774]: E0813 01:13:22.634835 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:13:22.639246 containerd[1543]: time="2025-08-13T01:13:22.638483597Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Aug 13 01:13:22.655082 kubelet[2774]: I0813 01:13:22.654868 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-bf7d6589c-n48dd" podStartSLOduration=3.41003726 podStartE2EDuration="5.654850127s" podCreationTimestamp="2025-08-13 01:13:17 +0000 UTC" firstStartedPulling="2025-08-13 01:13:18.406835767 +0000 UTC m=+22.044511196" lastFinishedPulling="2025-08-13 01:13:20.651648634 +0000 UTC m=+24.289324063" observedRunningTime="2025-08-13 01:13:21.660576991 +0000 UTC m=+25.298252420" watchObservedRunningTime="2025-08-13 01:13:22.654850127 +0000 UTC m=+26.292525556" Aug 13 01:13:23.642531 kubelet[2774]: E0813 01:13:23.642469 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:13:24.504091 kubelet[2774]: E0813 01:13:24.504030 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bc7dg" podUID="e1069218-cdb9-4130-adce-4bdd23361a59" Aug 13 01:13:24.644436 kubelet[2774]: E0813 01:13:24.644389 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:13:26.360987 containerd[1543]: time="2025-08-13T01:13:26.360917060Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:13:26.362190 containerd[1543]: time="2025-08-13T01:13:26.362158224Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Aug 
13 01:13:26.363035 containerd[1543]: time="2025-08-13T01:13:26.362996687Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:13:26.364997 containerd[1543]: time="2025-08-13T01:13:26.364961145Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:13:26.365584 containerd[1543]: time="2025-08-13T01:13:26.365473027Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 3.726519158s" Aug 13 01:13:26.365584 containerd[1543]: time="2025-08-13T01:13:26.365499307Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Aug 13 01:13:26.368171 containerd[1543]: time="2025-08-13T01:13:26.368130846Z" level=info msg="CreateContainer within sandbox \"76ae3444196225a303323f3547299db8a8e29579071416116c7a062f7244cd3f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 13 01:13:26.398044 containerd[1543]: time="2025-08-13T01:13:26.396422223Z" level=info msg="Container 3f63a025f78c468bc3dce98a47f80ab45cba1df87b16fc6b0df5704a0f46a630: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:13:26.407320 containerd[1543]: time="2025-08-13T01:13:26.407269153Z" level=info msg="CreateContainer within sandbox \"76ae3444196225a303323f3547299db8a8e29579071416116c7a062f7244cd3f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3f63a025f78c468bc3dce98a47f80ab45cba1df87b16fc6b0df5704a0f46a630\"" Aug 13 01:13:26.408336 containerd[1543]: time="2025-08-13T01:13:26.408263267Z" level=info msg="StartContainer for \"3f63a025f78c468bc3dce98a47f80ab45cba1df87b16fc6b0df5704a0f46a630\"" Aug 13 01:13:26.412479 containerd[1543]: time="2025-08-13T01:13:26.412450163Z" level=info msg="connecting to shim 3f63a025f78c468bc3dce98a47f80ab45cba1df87b16fc6b0df5704a0f46a630" address="unix:///run/containerd/s/cf9b7ff19b55ec6486c0acc44bf3240394cd63a1845dc6c2022ea4b685d3e5e0" protocol=ttrpc version=3 Aug 13 01:13:26.498777 systemd[1]: Started cri-containerd-3f63a025f78c468bc3dce98a47f80ab45cba1df87b16fc6b0df5704a0f46a630.scope - libcontainer container 3f63a025f78c468bc3dce98a47f80ab45cba1df87b16fc6b0df5704a0f46a630. 
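The pod_startup_latency_tracker entry a few entries above reports two durations for calico-typha-bf7d6589c-n48dd. The values are consistent with podStartE2EDuration being watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration being that interval minus the time spent pulling images (lastFinishedPulling minus firstStartedPulling). The sketch below recomputes both numbers purely from the timestamps quoted in that entry and reproduces 5.654850127s and 3.41003726s; the field interpretation is an assumption inferred from the arithmetic, not taken from kubelet source:

```go
// Recompute the startup durations reported by pod_startup_latency_tracker
// for calico-typha-bf7d6589c-n48dd from the timestamps in the log entry.
// Sketch based only on those quoted fields.
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-08-13 01:13:17 +0000 UTC")             // podCreationTimestamp
	firstPull := mustParse("2025-08-13 01:13:18.406835767 +0000 UTC") // firstStartedPulling
	lastPull := mustParse("2025-08-13 01:13:20.651648634 +0000 UTC")  // lastFinishedPulling
	watched := mustParse("2025-08-13 01:13:22.654850127 +0000 UTC")   // watchObservedRunningTime

	e2e := watched.Sub(created)          // 5.654850127s = podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // 3.41003726s  = podStartSLOduration
	fmt.Println("podStartE2EDuration:", e2e)
	fmt.Println("podStartSLOduration:", slo)
}
```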
Aug 13 01:13:26.512402 kubelet[2774]: E0813 01:13:26.505620 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bc7dg" podUID="e1069218-cdb9-4130-adce-4bdd23361a59" Aug 13 01:13:26.652602 containerd[1543]: time="2025-08-13T01:13:26.650925769Z" level=info msg="StartContainer for \"3f63a025f78c468bc3dce98a47f80ab45cba1df87b16fc6b0df5704a0f46a630\" returns successfully" Aug 13 01:13:28.504586 kubelet[2774]: E0813 01:13:28.504537 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bc7dg" podUID="e1069218-cdb9-4130-adce-4bdd23361a59" Aug 13 01:13:28.808234 containerd[1543]: time="2025-08-13T01:13:28.808073910Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 01:13:28.812528 systemd[1]: cri-containerd-3f63a025f78c468bc3dce98a47f80ab45cba1df87b16fc6b0df5704a0f46a630.scope: Deactivated successfully. Aug 13 01:13:28.812914 systemd[1]: cri-containerd-3f63a025f78c468bc3dce98a47f80ab45cba1df87b16fc6b0df5704a0f46a630.scope: Consumed 2.262s CPU time, 192.1M memory peak, 171.2M written to disk. Aug 13 01:13:28.814738 containerd[1543]: time="2025-08-13T01:13:28.814615563Z" level=info msg="received exit event container_id:\"3f63a025f78c468bc3dce98a47f80ab45cba1df87b16fc6b0df5704a0f46a630\" id:\"3f63a025f78c468bc3dce98a47f80ab45cba1df87b16fc6b0df5704a0f46a630\" pid:3524 exited_at:{seconds:1755047608 nanos:814325132}" Aug 13 01:13:28.814738 containerd[1543]: time="2025-08-13T01:13:28.814711933Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3f63a025f78c468bc3dce98a47f80ab45cba1df87b16fc6b0df5704a0f46a630\" id:\"3f63a025f78c468bc3dce98a47f80ab45cba1df87b16fc6b0df5704a0f46a630\" pid:3524 exited_at:{seconds:1755047608 nanos:814325132}" Aug 13 01:13:28.844585 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3f63a025f78c468bc3dce98a47f80ab45cba1df87b16fc6b0df5704a0f46a630-rootfs.mount: Deactivated successfully. Aug 13 01:13:28.904239 kubelet[2774]: I0813 01:13:28.903872 2774 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Aug 13 01:13:28.947473 kubelet[2774]: I0813 01:13:28.946927 2774 status_manager.go:890] "Failed to get status for pod" podUID="e328d277-9a4f-4cd1-abd9-2dfeda4cfcc9" pod="calico-system/calico-kube-controllers-c5f875d88-fvdcx" err="pods \"calico-kube-controllers-c5f875d88-fvdcx\" is forbidden: User \"system:node:172-234-199-8\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node '172-234-199-8' and this object" Aug 13 01:13:28.960325 systemd[1]: Created slice kubepods-besteffort-pode328d277_9a4f_4cd1_abd9_2dfeda4cfcc9.slice - libcontainer container kubepods-besteffort-pode328d277_9a4f_4cd1_abd9_2dfeda4cfcc9.slice. Aug 13 01:13:28.987378 systemd[1]: Created slice kubepods-burstable-pod713f2d42_12d1_4f50_bdc7_9964c95a9e2a.slice - libcontainer container kubepods-burstable-pod713f2d42_12d1_4f50_bdc7_9964c95a9e2a.slice. 
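The two conditions above fit together: containerd saw a write event for /etc/cni/net.d/calico-kubeconfig but still found no network configuration in that directory, so it keeps reporting "cni plugin not initialized", and the kubelet in turn keeps the node NetworkNotReady, which is why csi-node-driver-bc7dg cannot be synced. A small sketch of the directory condition involved, assuming the loader simply wants at least one .conf/.conflist/.json file present (an illustration, not containerd's actual loader):

```go
// Sketch of the state behind "no network config found in /etc/cni/net.d":
// a kubeconfig for the CNI plugin may exist, but until a network config file
// (e.g. a *.conflist written by Calico's install-cni) appears, the runtime
// stays in "cni plugin not initialized". Hypothetical helper.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func hasCNIConfig(dir string) bool {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true
		}
	}
	// Files without these extensions (like "calico-kubeconfig") do not count.
	return false
}

func main() {
	if hasCNIConfig("/etc/cni/net.d") {
		fmt.Println("network config present; NetworkReady can become true")
		return
	}
	fmt.Println("cni plugin not initialized: no network config found in /etc/cni/net.d")
}
```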
Aug 13 01:13:28.999510 systemd[1]: Created slice kubepods-besteffort-pod375926b9_4069_4b73_ae15_ed314a449202.slice - libcontainer container kubepods-besteffort-pod375926b9_4069_4b73_ae15_ed314a449202.slice. Aug 13 01:13:29.010003 systemd[1]: Created slice kubepods-besteffort-pode2857874_c208_419c_8864_94ca990f4933.slice - libcontainer container kubepods-besteffort-pode2857874_c208_419c_8864_94ca990f4933.slice. Aug 13 01:13:29.022274 systemd[1]: Created slice kubepods-burstable-pod56ce87f4_747f_470b_8388_a8400bdda009.slice - libcontainer container kubepods-burstable-pod56ce87f4_747f_470b_8388_a8400bdda009.slice. Aug 13 01:13:29.031594 systemd[1]: Created slice kubepods-besteffort-pod4046fe50_03fc_4dc4_b202_24ffe10b29f3.slice - libcontainer container kubepods-besteffort-pod4046fe50_03fc_4dc4_b202_24ffe10b29f3.slice. Aug 13 01:13:29.042669 systemd[1]: Created slice kubepods-besteffort-poda96ea724_76b7_4e38_a8d3_10c9057e0e09.slice - libcontainer container kubepods-besteffort-poda96ea724_76b7_4e38_a8d3_10c9057e0e09.slice. Aug 13 01:13:29.069912 kubelet[2774]: I0813 01:13:29.069722 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4046fe50-03fc-4dc4-b202-24ffe10b29f3-config\") pod \"goldmane-768f4c5c69-6jmrw\" (UID: \"4046fe50-03fc-4dc4-b202-24ffe10b29f3\") " pod="calico-system/goldmane-768f4c5c69-6jmrw" Aug 13 01:13:29.069912 kubelet[2774]: I0813 01:13:29.069790 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/4046fe50-03fc-4dc4-b202-24ffe10b29f3-goldmane-key-pair\") pod \"goldmane-768f4c5c69-6jmrw\" (UID: \"4046fe50-03fc-4dc4-b202-24ffe10b29f3\") " pod="calico-system/goldmane-768f4c5c69-6jmrw" Aug 13 01:13:29.070367 kubelet[2774]: I0813 01:13:29.070161 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a96ea724-76b7-4e38-a8d3-10c9057e0e09-whisker-backend-key-pair\") pod \"whisker-6c85ff7dcf-vrmqf\" (UID: \"a96ea724-76b7-4e38-a8d3-10c9057e0e09\") " pod="calico-system/whisker-6c85ff7dcf-vrmqf" Aug 13 01:13:29.070367 kubelet[2774]: I0813 01:13:29.070240 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4046fe50-03fc-4dc4-b202-24ffe10b29f3-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-6jmrw\" (UID: \"4046fe50-03fc-4dc4-b202-24ffe10b29f3\") " pod="calico-system/goldmane-768f4c5c69-6jmrw" Aug 13 01:13:29.070367 kubelet[2774]: I0813 01:13:29.070263 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/713f2d42-12d1-4f50-bdc7-9964c95a9e2a-config-volume\") pod \"coredns-668d6bf9bc-qnwfr\" (UID: \"713f2d42-12d1-4f50-bdc7-9964c95a9e2a\") " pod="kube-system/coredns-668d6bf9bc-qnwfr" Aug 13 01:13:29.070367 kubelet[2774]: I0813 01:13:29.070321 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e2857874-c208-419c-8864-94ca990f4933-calico-apiserver-certs\") pod \"calico-apiserver-bc89dd6cc-47gnp\" (UID: \"e2857874-c208-419c-8864-94ca990f4933\") " pod="calico-apiserver/calico-apiserver-bc89dd6cc-47gnp" Aug 13 01:13:29.070637 kubelet[2774]: I0813 01:13:29.070338 2774 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a96ea724-76b7-4e38-a8d3-10c9057e0e09-whisker-ca-bundle\") pod \"whisker-6c85ff7dcf-vrmqf\" (UID: \"a96ea724-76b7-4e38-a8d3-10c9057e0e09\") " pod="calico-system/whisker-6c85ff7dcf-vrmqf" Aug 13 01:13:29.070637 kubelet[2774]: I0813 01:13:29.070549 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xj2xw\" (UniqueName: \"kubernetes.io/projected/4046fe50-03fc-4dc4-b202-24ffe10b29f3-kube-api-access-xj2xw\") pod \"goldmane-768f4c5c69-6jmrw\" (UID: \"4046fe50-03fc-4dc4-b202-24ffe10b29f3\") " pod="calico-system/goldmane-768f4c5c69-6jmrw" Aug 13 01:13:29.070637 kubelet[2774]: I0813 01:13:29.070566 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdd42\" (UniqueName: \"kubernetes.io/projected/375926b9-4069-4b73-ae15-ed314a449202-kube-api-access-kdd42\") pod \"calico-apiserver-bc89dd6cc-jsght\" (UID: \"375926b9-4069-4b73-ae15-ed314a449202\") " pod="calico-apiserver/calico-apiserver-bc89dd6cc-jsght" Aug 13 01:13:29.070768 kubelet[2774]: I0813 01:13:29.070585 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/375926b9-4069-4b73-ae15-ed314a449202-calico-apiserver-certs\") pod \"calico-apiserver-bc89dd6cc-jsght\" (UID: \"375926b9-4069-4b73-ae15-ed314a449202\") " pod="calico-apiserver/calico-apiserver-bc89dd6cc-jsght" Aug 13 01:13:29.070957 kubelet[2774]: I0813 01:13:29.070896 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4t6n\" (UniqueName: \"kubernetes.io/projected/e328d277-9a4f-4cd1-abd9-2dfeda4cfcc9-kube-api-access-j4t6n\") pod \"calico-kube-controllers-c5f875d88-fvdcx\" (UID: \"e328d277-9a4f-4cd1-abd9-2dfeda4cfcc9\") " pod="calico-system/calico-kube-controllers-c5f875d88-fvdcx" Aug 13 01:13:29.071119 kubelet[2774]: I0813 01:13:29.071031 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcmnw\" (UniqueName: \"kubernetes.io/projected/a96ea724-76b7-4e38-a8d3-10c9057e0e09-kube-api-access-zcmnw\") pod \"whisker-6c85ff7dcf-vrmqf\" (UID: \"a96ea724-76b7-4e38-a8d3-10c9057e0e09\") " pod="calico-system/whisker-6c85ff7dcf-vrmqf" Aug 13 01:13:29.071119 kubelet[2774]: I0813 01:13:29.071051 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gj6q\" (UniqueName: \"kubernetes.io/projected/56ce87f4-747f-470b-8388-a8400bdda009-kube-api-access-8gj6q\") pod \"coredns-668d6bf9bc-wp8jf\" (UID: \"56ce87f4-747f-470b-8388-a8400bdda009\") " pod="kube-system/coredns-668d6bf9bc-wp8jf" Aug 13 01:13:29.071221 kubelet[2774]: I0813 01:13:29.071208 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e328d277-9a4f-4cd1-abd9-2dfeda4cfcc9-tigera-ca-bundle\") pod \"calico-kube-controllers-c5f875d88-fvdcx\" (UID: \"e328d277-9a4f-4cd1-abd9-2dfeda4cfcc9\") " pod="calico-system/calico-kube-controllers-c5f875d88-fvdcx" Aug 13 01:13:29.071423 kubelet[2774]: I0813 01:13:29.071382 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnh4g\" (UniqueName: 
\"kubernetes.io/projected/713f2d42-12d1-4f50-bdc7-9964c95a9e2a-kube-api-access-wnh4g\") pod \"coredns-668d6bf9bc-qnwfr\" (UID: \"713f2d42-12d1-4f50-bdc7-9964c95a9e2a\") " pod="kube-system/coredns-668d6bf9bc-qnwfr" Aug 13 01:13:29.071527 kubelet[2774]: I0813 01:13:29.071472 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-842bg\" (UniqueName: \"kubernetes.io/projected/e2857874-c208-419c-8864-94ca990f4933-kube-api-access-842bg\") pod \"calico-apiserver-bc89dd6cc-47gnp\" (UID: \"e2857874-c208-419c-8864-94ca990f4933\") " pod="calico-apiserver/calico-apiserver-bc89dd6cc-47gnp" Aug 13 01:13:29.071527 kubelet[2774]: I0813 01:13:29.071496 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/56ce87f4-747f-470b-8388-a8400bdda009-config-volume\") pod \"coredns-668d6bf9bc-wp8jf\" (UID: \"56ce87f4-747f-470b-8388-a8400bdda009\") " pod="kube-system/coredns-668d6bf9bc-wp8jf" Aug 13 01:13:29.279634 containerd[1543]: time="2025-08-13T01:13:29.279568406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c5f875d88-fvdcx,Uid:e328d277-9a4f-4cd1-abd9-2dfeda4cfcc9,Namespace:calico-system,Attempt:0,}" Aug 13 01:13:29.293406 kubelet[2774]: E0813 01:13:29.293186 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:13:29.295315 containerd[1543]: time="2025-08-13T01:13:29.295242320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qnwfr,Uid:713f2d42-12d1-4f50-bdc7-9964c95a9e2a,Namespace:kube-system,Attempt:0,}" Aug 13 01:13:29.312211 containerd[1543]: time="2025-08-13T01:13:29.312132648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bc89dd6cc-jsght,Uid:375926b9-4069-4b73-ae15-ed314a449202,Namespace:calico-apiserver,Attempt:0,}" Aug 13 01:13:29.341491 kubelet[2774]: E0813 01:13:29.337505 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:13:29.343145 containerd[1543]: time="2025-08-13T01:13:29.342841084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wp8jf,Uid:56ce87f4-747f-470b-8388-a8400bdda009,Namespace:kube-system,Attempt:0,}" Aug 13 01:13:29.343960 containerd[1543]: time="2025-08-13T01:13:29.343592847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bc89dd6cc-47gnp,Uid:e2857874-c208-419c-8864-94ca990f4933,Namespace:calico-apiserver,Attempt:0,}" Aug 13 01:13:29.352992 containerd[1543]: time="2025-08-13T01:13:29.352661688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-6jmrw,Uid:4046fe50-03fc-4dc4-b202-24ffe10b29f3,Namespace:calico-system,Attempt:0,}" Aug 13 01:13:29.379925 containerd[1543]: time="2025-08-13T01:13:29.379823352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c85ff7dcf-vrmqf,Uid:a96ea724-76b7-4e38-a8d3-10c9057e0e09,Namespace:calico-system,Attempt:0,}" Aug 13 01:13:29.554094 containerd[1543]: time="2025-08-13T01:13:29.554027472Z" level=error msg="Failed to destroy network for sandbox \"b0c5b471f48b271beb395c233fbd415f0f4b620d26dbf95bb0beeaccc4c97f52\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:29.558144 containerd[1543]: time="2025-08-13T01:13:29.558106047Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bc89dd6cc-jsght,Uid:375926b9-4069-4b73-ae15-ed314a449202,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0c5b471f48b271beb395c233fbd415f0f4b620d26dbf95bb0beeaccc4c97f52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:29.561140 kubelet[2774]: E0813 01:13:29.559838 2774 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0c5b471f48b271beb395c233fbd415f0f4b620d26dbf95bb0beeaccc4c97f52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:29.561140 kubelet[2774]: E0813 01:13:29.559935 2774 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0c5b471f48b271beb395c233fbd415f0f4b620d26dbf95bb0beeaccc4c97f52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bc89dd6cc-jsght" Aug 13 01:13:29.561140 kubelet[2774]: E0813 01:13:29.559961 2774 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0c5b471f48b271beb395c233fbd415f0f4b620d26dbf95bb0beeaccc4c97f52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bc89dd6cc-jsght" Aug 13 01:13:29.561639 kubelet[2774]: E0813 01:13:29.560019 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-bc89dd6cc-jsght_calico-apiserver(375926b9-4069-4b73-ae15-ed314a449202)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-bc89dd6cc-jsght_calico-apiserver(375926b9-4069-4b73-ae15-ed314a449202)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b0c5b471f48b271beb395c233fbd415f0f4b620d26dbf95bb0beeaccc4c97f52\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bc89dd6cc-jsght" podUID="375926b9-4069-4b73-ae15-ed314a449202" Aug 13 01:13:29.649457 containerd[1543]: time="2025-08-13T01:13:29.649178571Z" level=error msg="Failed to destroy network for sandbox \"912c18a3bca4110cdd420b056f9880f6c7a161102e5ead6aaf158fd874ae7627\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:29.655221 containerd[1543]: time="2025-08-13T01:13:29.652520552Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-c5f875d88-fvdcx,Uid:e328d277-9a4f-4cd1-abd9-2dfeda4cfcc9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"912c18a3bca4110cdd420b056f9880f6c7a161102e5ead6aaf158fd874ae7627\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:29.655423 kubelet[2774]: E0813 01:13:29.652777 2774 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"912c18a3bca4110cdd420b056f9880f6c7a161102e5ead6aaf158fd874ae7627\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:29.655423 kubelet[2774]: E0813 01:13:29.652841 2774 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"912c18a3bca4110cdd420b056f9880f6c7a161102e5ead6aaf158fd874ae7627\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c5f875d88-fvdcx" Aug 13 01:13:29.655423 kubelet[2774]: E0813 01:13:29.652867 2774 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"912c18a3bca4110cdd420b056f9880f6c7a161102e5ead6aaf158fd874ae7627\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c5f875d88-fvdcx" Aug 13 01:13:29.655591 kubelet[2774]: E0813 01:13:29.652910 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-c5f875d88-fvdcx_calico-system(e328d277-9a4f-4cd1-abd9-2dfeda4cfcc9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-c5f875d88-fvdcx_calico-system(e328d277-9a4f-4cd1-abd9-2dfeda4cfcc9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"912c18a3bca4110cdd420b056f9880f6c7a161102e5ead6aaf158fd874ae7627\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c5f875d88-fvdcx" podUID="e328d277-9a4f-4cd1-abd9-2dfeda4cfcc9" Aug 13 01:13:29.669696 containerd[1543]: time="2025-08-13T01:13:29.669594481Z" level=error msg="Failed to destroy network for sandbox \"9dac099011a176b8ac1c084f85c92cde5c648babbd5cd5c6cdb26f3a7947d389\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:29.670577 containerd[1543]: time="2025-08-13T01:13:29.670549734Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qnwfr,Uid:713f2d42-12d1-4f50-bdc7-9964c95a9e2a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"9dac099011a176b8ac1c084f85c92cde5c648babbd5cd5c6cdb26f3a7947d389\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:29.671135 kubelet[2774]: E0813 01:13:29.670885 2774 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dac099011a176b8ac1c084f85c92cde5c648babbd5cd5c6cdb26f3a7947d389\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:29.671135 kubelet[2774]: E0813 01:13:29.670956 2774 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dac099011a176b8ac1c084f85c92cde5c648babbd5cd5c6cdb26f3a7947d389\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qnwfr" Aug 13 01:13:29.671135 kubelet[2774]: E0813 01:13:29.670983 2774 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dac099011a176b8ac1c084f85c92cde5c648babbd5cd5c6cdb26f3a7947d389\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qnwfr" Aug 13 01:13:29.671288 kubelet[2774]: E0813 01:13:29.671035 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-qnwfr_kube-system(713f2d42-12d1-4f50-bdc7-9964c95a9e2a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-qnwfr_kube-system(713f2d42-12d1-4f50-bdc7-9964c95a9e2a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9dac099011a176b8ac1c084f85c92cde5c648babbd5cd5c6cdb26f3a7947d389\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-qnwfr" podUID="713f2d42-12d1-4f50-bdc7-9964c95a9e2a" Aug 13 01:13:29.694342 containerd[1543]: time="2025-08-13T01:13:29.694287107Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 01:13:29.703432 containerd[1543]: time="2025-08-13T01:13:29.702304234Z" level=error msg="Failed to destroy network for sandbox \"14835ce019cdba4baf75afa96e8f19c2166477feb2bcad24700c722c6b3fa912\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:29.703847 containerd[1543]: time="2025-08-13T01:13:29.703712350Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-6jmrw,Uid:4046fe50-03fc-4dc4-b202-24ffe10b29f3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"14835ce019cdba4baf75afa96e8f19c2166477feb2bcad24700c722c6b3fa912\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Aug 13 01:13:29.704332 kubelet[2774]: E0813 01:13:29.703908 2774 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14835ce019cdba4baf75afa96e8f19c2166477feb2bcad24700c722c6b3fa912\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:29.704332 kubelet[2774]: E0813 01:13:29.703993 2774 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14835ce019cdba4baf75afa96e8f19c2166477feb2bcad24700c722c6b3fa912\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-6jmrw" Aug 13 01:13:29.704332 kubelet[2774]: E0813 01:13:29.704014 2774 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14835ce019cdba4baf75afa96e8f19c2166477feb2bcad24700c722c6b3fa912\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-6jmrw" Aug 13 01:13:29.704470 kubelet[2774]: E0813 01:13:29.704076 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-6jmrw_calico-system(4046fe50-03fc-4dc4-b202-24ffe10b29f3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-6jmrw_calico-system(4046fe50-03fc-4dc4-b202-24ffe10b29f3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"14835ce019cdba4baf75afa96e8f19c2166477feb2bcad24700c722c6b3fa912\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-6jmrw" podUID="4046fe50-03fc-4dc4-b202-24ffe10b29f3" Aug 13 01:13:29.712034 containerd[1543]: time="2025-08-13T01:13:29.711823707Z" level=error msg="Failed to destroy network for sandbox \"763f1a10bfca65e61c9ee4c308ccef2d0868fec372c8c5b6b7187e6fb47dbcf0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:29.713603 containerd[1543]: time="2025-08-13T01:13:29.713575963Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c85ff7dcf-vrmqf,Uid:a96ea724-76b7-4e38-a8d3-10c9057e0e09,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"763f1a10bfca65e61c9ee4c308ccef2d0868fec372c8c5b6b7187e6fb47dbcf0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:29.718726 kubelet[2774]: E0813 01:13:29.717510 2774 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"763f1a10bfca65e61c9ee4c308ccef2d0868fec372c8c5b6b7187e6fb47dbcf0\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:29.718726 kubelet[2774]: E0813 01:13:29.717569 2774 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"763f1a10bfca65e61c9ee4c308ccef2d0868fec372c8c5b6b7187e6fb47dbcf0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6c85ff7dcf-vrmqf" Aug 13 01:13:29.718726 kubelet[2774]: E0813 01:13:29.717590 2774 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"763f1a10bfca65e61c9ee4c308ccef2d0868fec372c8c5b6b7187e6fb47dbcf0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6c85ff7dcf-vrmqf" Aug 13 01:13:29.719059 kubelet[2774]: E0813 01:13:29.717629 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6c85ff7dcf-vrmqf_calico-system(a96ea724-76b7-4e38-a8d3-10c9057e0e09)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6c85ff7dcf-vrmqf_calico-system(a96ea724-76b7-4e38-a8d3-10c9057e0e09)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"763f1a10bfca65e61c9ee4c308ccef2d0868fec372c8c5b6b7187e6fb47dbcf0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6c85ff7dcf-vrmqf" podUID="a96ea724-76b7-4e38-a8d3-10c9057e0e09" Aug 13 01:13:29.741317 containerd[1543]: time="2025-08-13T01:13:29.741245649Z" level=error msg="Failed to destroy network for sandbox \"e7062113f46bef70fe2964be26549e64b3079557cfa5645ee94eaec6723b62a8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:29.742696 containerd[1543]: time="2025-08-13T01:13:29.742659354Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bc89dd6cc-47gnp,Uid:e2857874-c208-419c-8864-94ca990f4933,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7062113f46bef70fe2964be26549e64b3079557cfa5645ee94eaec6723b62a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:29.742994 kubelet[2774]: E0813 01:13:29.742946 2774 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7062113f46bef70fe2964be26549e64b3079557cfa5645ee94eaec6723b62a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:29.743073 kubelet[2774]: E0813 01:13:29.743022 2774 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"e7062113f46bef70fe2964be26549e64b3079557cfa5645ee94eaec6723b62a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bc89dd6cc-47gnp" Aug 13 01:13:29.743073 kubelet[2774]: E0813 01:13:29.743054 2774 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7062113f46bef70fe2964be26549e64b3079557cfa5645ee94eaec6723b62a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bc89dd6cc-47gnp" Aug 13 01:13:29.743146 kubelet[2774]: E0813 01:13:29.743114 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-bc89dd6cc-47gnp_calico-apiserver(e2857874-c208-419c-8864-94ca990f4933)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-bc89dd6cc-47gnp_calico-apiserver(e2857874-c208-419c-8864-94ca990f4933)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e7062113f46bef70fe2964be26549e64b3079557cfa5645ee94eaec6723b62a8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bc89dd6cc-47gnp" podUID="e2857874-c208-419c-8864-94ca990f4933" Aug 13 01:13:29.745327 containerd[1543]: time="2025-08-13T01:13:29.745290943Z" level=error msg="Failed to destroy network for sandbox \"e3d4ba15e5e8028c6a3ad8a3147d80b47e38a6dcd0277fa80faa55c457089f1e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:29.746294 containerd[1543]: time="2025-08-13T01:13:29.746186306Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wp8jf,Uid:56ce87f4-747f-470b-8388-a8400bdda009,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3d4ba15e5e8028c6a3ad8a3147d80b47e38a6dcd0277fa80faa55c457089f1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:29.746969 kubelet[2774]: E0813 01:13:29.746514 2774 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3d4ba15e5e8028c6a3ad8a3147d80b47e38a6dcd0277fa80faa55c457089f1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:29.746969 kubelet[2774]: E0813 01:13:29.746554 2774 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3d4ba15e5e8028c6a3ad8a3147d80b47e38a6dcd0277fa80faa55c457089f1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wp8jf" Aug 13 
01:13:29.746969 kubelet[2774]: E0813 01:13:29.746572 2774 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3d4ba15e5e8028c6a3ad8a3147d80b47e38a6dcd0277fa80faa55c457089f1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wp8jf" Aug 13 01:13:29.747143 kubelet[2774]: E0813 01:13:29.746600 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-wp8jf_kube-system(56ce87f4-747f-470b-8388-a8400bdda009)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-wp8jf_kube-system(56ce87f4-747f-470b-8388-a8400bdda009)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e3d4ba15e5e8028c6a3ad8a3147d80b47e38a6dcd0277fa80faa55c457089f1e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-wp8jf" podUID="56ce87f4-747f-470b-8388-a8400bdda009" Aug 13 01:13:30.515632 systemd[1]: Created slice kubepods-besteffort-pode1069218_cdb9_4130_adce_4bdd23361a59.slice - libcontainer container kubepods-besteffort-pode1069218_cdb9_4130_adce_4bdd23361a59.slice. Aug 13 01:13:30.522424 containerd[1543]: time="2025-08-13T01:13:30.522218506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bc7dg,Uid:e1069218-cdb9-4130-adce-4bdd23361a59,Namespace:calico-system,Attempt:0,}" Aug 13 01:13:30.704670 containerd[1543]: time="2025-08-13T01:13:30.704440999Z" level=error msg="Failed to destroy network for sandbox \"9d2f578830fef4770ea70461d3e56b126005be44895c2142b2a25626eb2b0633\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:30.712143 systemd[1]: run-netns-cni\x2d01d130ae\x2d2f9f\x2dfcda\x2daf99\x2df331dfd4c177.mount: Deactivated successfully. 
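Every failed sandbox above (calico-apiserver, calico-kube-controllers, coredns, goldmane, whisker, and now csi-node-driver) carries the same root cause string: the Calico CNI plugin stats /var/lib/calico/nodename, the file calico-node writes once it is running, and refuses to set up or tear down pod networking until it exists. Because calico-node never comes up in this log (its image pull fails on disk space a few entries later), the file is never created and each RunPodSandbox attempt fails identically. A rough sketch of that gate, as an illustration rather than Calico's code:

```go
// Sketch of the readiness gate behind the repeated
// "stat /var/lib/calico/nodename: no such file or directory" errors:
// pod networking cannot proceed until calico-node has written its node name.
package main

import (
	"fmt"
	"os"
	"strings"
)

func calicoNodeName() (string, error) {
	b, err := os.ReadFile("/var/lib/calico/nodename")
	if err != nil {
		// This is the state the log is stuck in: calico-node never started,
		// so the file does not exist on the host.
		return "", fmt.Errorf("check that the calico/node container is running and has mounted /var/lib/calico/: %w", err)
	}
	return strings.TrimSpace(string(b)), nil
}

func main() {
	name, err := calicoNodeName()
	if err != nil {
		fmt.Fprintln(os.Stderr, "calico CNI not ready:", err)
		os.Exit(1)
	}
	fmt.Println("calico node name:", name)
}
```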
Aug 13 01:13:30.716033 containerd[1543]: time="2025-08-13T01:13:30.715991397Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bc7dg,Uid:e1069218-cdb9-4130-adce-4bdd23361a59,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d2f578830fef4770ea70461d3e56b126005be44895c2142b2a25626eb2b0633\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:30.716492 kubelet[2774]: E0813 01:13:30.716439 2774 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d2f578830fef4770ea70461d3e56b126005be44895c2142b2a25626eb2b0633\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:30.717182 kubelet[2774]: E0813 01:13:30.716520 2774 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d2f578830fef4770ea70461d3e56b126005be44895c2142b2a25626eb2b0633\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bc7dg" Aug 13 01:13:30.717182 kubelet[2774]: E0813 01:13:30.716555 2774 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d2f578830fef4770ea70461d3e56b126005be44895c2142b2a25626eb2b0633\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bc7dg" Aug 13 01:13:30.717182 kubelet[2774]: E0813 01:13:30.716599 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bc7dg_calico-system(e1069218-cdb9-4130-adce-4bdd23361a59)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bc7dg_calico-system(e1069218-cdb9-4130-adce-4bdd23361a59)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9d2f578830fef4770ea70461d3e56b126005be44895c2142b2a25626eb2b0633\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bc7dg" podUID="e1069218-cdb9-4130-adce-4bdd23361a59" Aug 13 01:13:34.489729 containerd[1543]: time="2025-08-13T01:13:34.489578695Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2906403807: write /var/lib/containerd/tmpmounts/containerd-mount2906403807/usr/bin/calico-node: no space left on device" Aug 13 01:13:34.489729 containerd[1543]: time="2025-08-13T01:13:34.489716666Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Aug 13 
01:13:34.494106 kubelet[2774]: E0813 01:13:34.490284 2774 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2906403807: write /var/lib/containerd/tmpmounts/containerd-mount2906403807/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.2" Aug 13 01:13:34.494106 kubelet[2774]: E0813 01:13:34.490463 2774 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2906403807: write /var/lib/containerd/tmpmounts/containerd-mount2906403807/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.2" Aug 13 01:13:34.491168 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2906403807.mount: Deactivated successfully. Aug 13 01:13:34.494964 kubelet[2774]: E0813 01:13:34.491014 2774 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-node,Image:ghcr.io/flatcar/calico/node:v3.30.2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:WAIT_FOR_DATASTORE,Value:true,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:k8s,operator,bgp,ValueFrom:nil,},EnvVar{Name:CALICO_DISABLE_FILE_LOGGING,Value:false,ValueFrom:nil,},EnvVar{Name:FELIX_DEFAULTENDPOINTTOHOSTACTION,Value:ACCEPT,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHENABLED,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHPORT,Value:9099,ValueFrom:nil,},EnvVar{Name:NODENAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:FELIX_TYPHAK8SNAMESPACE,Value:calico-system,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAK8SSERVICENAME,Value:calico-typha,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACAFILE,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACERTFILE,Value:/node-certs/tls.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAKEYFILE,Value:/node-certs/tls.key,ValueFrom:nil,},EnvVar{Name:NO_DEFAULT_POOLS,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSGOLDMANESERVER,Value:goldmane.calico-system.svc:7443,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSFLUSHINTERVAL,Value:15,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACN,Value:typha-server,ValueFrom:nil,},EnvVar{Name:CALICO_MANAGE_CNI,Value:true,ValueFrom:nil,},EnvVar{Name:CALICO_NETWORKING_BACKEND,Value:bird,ValueFrom:nil,},EnvVar{Name:IP,Value:autodetect,ValueFrom:nil,},EnvVar{Name:IP_AUTODETECTION_METHOD,Value:first-found,ValueFrom:nil,},EnvVar{Name:IP6,Value:none,ValueFrom:nil,},EnvVar{Name:FELIX_IPV6SUPPORT,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMou
nt{Name:cni-log-dir,ReadOnly:false,MountPath:/var/log/calico/cni,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-net-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:node-certs,ReadOnly:true,MountPath:/node-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:policysync,ReadOnly:false,MountPath:/var/run/nodeagent,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib-calico,ReadOnly:false,MountPath:/var/lib/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-calico,ReadOnly:false,MountPath:/var/run/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b2tsn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/liveness,Port:{0 9099 },Host:localhost,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/calico-node -bird-ready -felix-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/bin/calico-node -shutdown],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-node-9bc9j_calico-system(0909eaba-89ba-4b02-b2f3-a17e3b6e2afc): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2906403807: write /var/lib/containerd/tmpmounts/containerd-mount2906403807/usr/bin/calico-node: no space left on device" logger="UnhandledError" Aug 13 01:13:34.495157 kubelet[2774]: E0813 01:13:34.493856 2774 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2906403807: write /var/lib/containerd/tmpmounts/containerd-mount2906403807/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-9bc9j" podUID="0909eaba-89ba-4b02-b2f3-a17e3b6e2afc" Aug 13 01:13:36.766516 kubelet[2774]: I0813 01:13:36.766473 2774 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:13:36.766516 kubelet[2774]: I0813 01:13:36.766522 2774 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:13:36.769304 kubelet[2774]: I0813 01:13:36.769249 2774 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:13:36.782586 kubelet[2774]: I0813 01:13:36.782551 2774 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:13:36.782741 kubelet[2774]: I0813 01:13:36.782638 2774 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-apiserver/calico-apiserver-bc89dd6cc-47gnp","calico-system/whisker-6c85ff7dcf-vrmqf","calico-system/goldmane-768f4c5c69-6jmrw","calico-apiserver/calico-apiserver-bc89dd6cc-jsght","kube-system/coredns-668d6bf9bc-wp8jf","calico-system/calico-kube-controllers-c5f875d88-fvdcx","kube-system/coredns-668d6bf9bc-qnwfr","calico-system/csi-node-driver-bc7dg","calico-system/calico-node-9bc9j","tigera-operator/tigera-operator-747864d56d-2rkjv","calico-system/calico-typha-bf7d6589c-n48dd","kube-system/kube-controller-manager-172-234-199-8","kube-system/kube-proxy-8p8zx","kube-system/kube-apiserver-172-234-199-8","kube-system/kube-scheduler-172-234-199-8"] Aug 13 01:13:36.789705 kubelet[2774]: I0813 01:13:36.789678 2774 eviction_manager.go:627] "Eviction manager: pod is evicted successfully" pod="calico-apiserver/calico-apiserver-bc89dd6cc-47gnp" Aug 13 01:13:36.790211 kubelet[2774]: I0813 01:13:36.789708 2774 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["calico-apiserver/calico-apiserver-bc89dd6cc-47gnp"] Aug 13 01:13:36.814149 kubelet[2774]: I0813 01:13:36.814072 2774 kubelet.go:2351] "Pod admission denied" podUID="f327cc87-6150-45a7-839f-5278616a8185" pod="calico-apiserver/calico-apiserver-bc89dd6cc-lp2sg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:36.846693 kubelet[2774]: I0813 01:13:36.845329 2774 kubelet.go:2351] "Pod admission denied" podUID="f7972e60-3b5e-4691-8e89-3e2e4fb796a0" pod="calico-apiserver/calico-apiserver-bc89dd6cc-tdb8s" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:36.873846 kubelet[2774]: I0813 01:13:36.873773 2774 kubelet.go:2351] "Pod admission denied" podUID="da7ebbd9-8249-4c59-872f-bacbd6c007f0" pod="calico-apiserver/calico-apiserver-bc89dd6cc-htx4b" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:13:36.880773 kubelet[2774]: I0813 01:13:36.880601 2774 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-842bg\" (UniqueName: \"kubernetes.io/projected/e2857874-c208-419c-8864-94ca990f4933-kube-api-access-842bg\") pod \"e2857874-c208-419c-8864-94ca990f4933\" (UID: \"e2857874-c208-419c-8864-94ca990f4933\") " Aug 13 01:13:36.882331 kubelet[2774]: I0813 01:13:36.882302 2774 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e2857874-c208-419c-8864-94ca990f4933-calico-apiserver-certs\") pod \"e2857874-c208-419c-8864-94ca990f4933\" (UID: \"e2857874-c208-419c-8864-94ca990f4933\") " Aug 13 01:13:36.891374 kubelet[2774]: I0813 01:13:36.887833 2774 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2857874-c208-419c-8864-94ca990f4933-kube-api-access-842bg" (OuterVolumeSpecName: "kube-api-access-842bg") pod "e2857874-c208-419c-8864-94ca990f4933" (UID: "e2857874-c208-419c-8864-94ca990f4933"). InnerVolumeSpecName "kube-api-access-842bg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 01:13:36.895302 kubelet[2774]: I0813 01:13:36.893263 2774 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2857874-c208-419c-8864-94ca990f4933-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "e2857874-c208-419c-8864-94ca990f4933" (UID: "e2857874-c208-419c-8864-94ca990f4933"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 01:13:36.894798 systemd[1]: var-lib-kubelet-pods-e2857874\x2dc208\x2d419c\x2d8864\x2d94ca990f4933-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d842bg.mount: Deactivated successfully. Aug 13 01:13:36.895106 systemd[1]: var-lib-kubelet-pods-e2857874\x2dc208\x2d419c\x2d8864\x2d94ca990f4933-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Aug 13 01:13:36.916254 kubelet[2774]: I0813 01:13:36.916197 2774 kubelet.go:2351] "Pod admission denied" podUID="88987740-a671-4bc8-ab56-d1745b1374aa" pod="calico-apiserver/calico-apiserver-bc89dd6cc-s9vz4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:36.942315 kubelet[2774]: I0813 01:13:36.942256 2774 kubelet.go:2351] "Pod admission denied" podUID="f3a793f1-349f-4237-adb5-b8695d59a533" pod="calico-apiserver/calico-apiserver-bc89dd6cc-snkjs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:36.979777 kubelet[2774]: I0813 01:13:36.978722 2774 kubelet.go:2351] "Pod admission denied" podUID="5c0f7e39-4c31-4c54-ab7e-7fb5e3fb22df" pod="calico-apiserver/calico-apiserver-bc89dd6cc-bddnj" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:13:36.983807 kubelet[2774]: I0813 01:13:36.983772 2774 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-842bg\" (UniqueName: \"kubernetes.io/projected/e2857874-c208-419c-8864-94ca990f4933-kube-api-access-842bg\") on node \"172-234-199-8\" DevicePath \"\"" Aug 13 01:13:36.983807 kubelet[2774]: I0813 01:13:36.983807 2774 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e2857874-c208-419c-8864-94ca990f4933-calico-apiserver-certs\") on node \"172-234-199-8\" DevicePath \"\"" Aug 13 01:13:37.012098 kubelet[2774]: I0813 01:13:37.012038 2774 kubelet.go:2351] "Pod admission denied" podUID="dbc3e759-762e-457e-8f6a-6fc79e6b93c2" pod="calico-apiserver/calico-apiserver-bc89dd6cc-bm56w" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:37.040331 kubelet[2774]: I0813 01:13:37.039758 2774 kubelet.go:2351] "Pod admission denied" podUID="82b2c20e-b057-4096-8d12-ead97e466c9f" pod="calico-apiserver/calico-apiserver-bc89dd6cc-6v2x2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:37.065437 kubelet[2774]: I0813 01:13:37.065382 2774 kubelet.go:2351] "Pod admission denied" podUID="d0591a1a-ccda-4205-a1ff-9356f1d1f27b" pod="calico-apiserver/calico-apiserver-bc89dd6cc-pr5qp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:37.168179 kubelet[2774]: I0813 01:13:37.167572 2774 kubelet.go:2351] "Pod admission denied" podUID="de19e197-3085-4724-a5d3-2ce026c9f157" pod="calico-apiserver/calico-apiserver-bc89dd6cc-8lvxp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:37.317612 kubelet[2774]: I0813 01:13:37.317426 2774 kubelet.go:2351] "Pod admission denied" podUID="11a3eeba-53cd-4955-bb61-e5a8a9d569b8" pod="calico-apiserver/calico-apiserver-bc89dd6cc-22nd6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:37.464192 kubelet[2774]: I0813 01:13:37.464104 2774 kubelet.go:2351] "Pod admission denied" podUID="886c3bcc-1c78-49bc-86fb-78ccbbe8c9eb" pod="calico-apiserver/calico-apiserver-bc89dd6cc-jsnjn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:37.736174 systemd[1]: Removed slice kubepods-besteffort-pode2857874_c208_419c_8864_94ca990f4933.slice - libcontainer container kubepods-besteffort-pode2857874_c208_419c_8864_94ca990f4933.slice. Aug 13 01:13:37.789922 kubelet[2774]: I0813 01:13:37.789790 2774 eviction_manager.go:458] "Eviction manager: pods successfully cleaned up" pods=["calico-apiserver/calico-apiserver-bc89dd6cc-47gnp"] Aug 13 01:13:40.504637 containerd[1543]: time="2025-08-13T01:13:40.504560926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bc89dd6cc-jsght,Uid:375926b9-4069-4b73-ae15-ed314a449202,Namespace:calico-apiserver,Attempt:0,}" Aug 13 01:13:40.574741 containerd[1543]: time="2025-08-13T01:13:40.574671357Z" level=error msg="Failed to destroy network for sandbox \"96fa046a32e2bff1fe147b1ab830e4f3eb7228ff4b28e510ee6c739df9c6b2ec\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:40.577299 systemd[1]: run-netns-cni\x2ddd608401\x2d3d35\x2d7501\x2d14e1\x2d848bcb5d1fe1.mount: Deactivated successfully. 
Aug 13 01:13:40.579525 containerd[1543]: time="2025-08-13T01:13:40.579468330Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bc89dd6cc-jsght,Uid:375926b9-4069-4b73-ae15-ed314a449202,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"96fa046a32e2bff1fe147b1ab830e4f3eb7228ff4b28e510ee6c739df9c6b2ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:40.580225 kubelet[2774]: E0813 01:13:40.580069 2774 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96fa046a32e2bff1fe147b1ab830e4f3eb7228ff4b28e510ee6c739df9c6b2ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:40.580810 kubelet[2774]: E0813 01:13:40.580182 2774 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96fa046a32e2bff1fe147b1ab830e4f3eb7228ff4b28e510ee6c739df9c6b2ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bc89dd6cc-jsght" Aug 13 01:13:40.580863 kubelet[2774]: E0813 01:13:40.580814 2774 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96fa046a32e2bff1fe147b1ab830e4f3eb7228ff4b28e510ee6c739df9c6b2ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bc89dd6cc-jsght" Aug 13 01:13:40.580936 kubelet[2774]: E0813 01:13:40.580896 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-bc89dd6cc-jsght_calico-apiserver(375926b9-4069-4b73-ae15-ed314a449202)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-bc89dd6cc-jsght_calico-apiserver(375926b9-4069-4b73-ae15-ed314a449202)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"96fa046a32e2bff1fe147b1ab830e4f3eb7228ff4b28e510ee6c739df9c6b2ec\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bc89dd6cc-jsght" podUID="375926b9-4069-4b73-ae15-ed314a449202" Aug 13 01:13:41.505228 kubelet[2774]: E0813 01:13:41.504343 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:13:41.506188 containerd[1543]: time="2025-08-13T01:13:41.506105792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qnwfr,Uid:713f2d42-12d1-4f50-bdc7-9964c95a9e2a,Namespace:kube-system,Attempt:0,}" Aug 13 01:13:41.506188 containerd[1543]: time="2025-08-13T01:13:41.506112072Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-bc7dg,Uid:e1069218-cdb9-4130-adce-4bdd23361a59,Namespace:calico-system,Attempt:0,}" Aug 13 01:13:41.575955 containerd[1543]: time="2025-08-13T01:13:41.575683008Z" level=error msg="Failed to destroy network for sandbox \"5fe4f4f632dbd0a92b609218343236aaf08c607ec61d3a18e69a35abc3efff12\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:41.580388 containerd[1543]: time="2025-08-13T01:13:41.579659918Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bc7dg,Uid:e1069218-cdb9-4130-adce-4bdd23361a59,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5fe4f4f632dbd0a92b609218343236aaf08c607ec61d3a18e69a35abc3efff12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:41.579823 systemd[1]: run-netns-cni\x2dcfcc49de\x2df34f\x2d7ddd\x2dda22\x2de647d348db0b.mount: Deactivated successfully. Aug 13 01:13:41.582188 kubelet[2774]: E0813 01:13:41.581566 2774 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5fe4f4f632dbd0a92b609218343236aaf08c607ec61d3a18e69a35abc3efff12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:41.582188 kubelet[2774]: E0813 01:13:41.581641 2774 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5fe4f4f632dbd0a92b609218343236aaf08c607ec61d3a18e69a35abc3efff12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bc7dg" Aug 13 01:13:41.582188 kubelet[2774]: E0813 01:13:41.581668 2774 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5fe4f4f632dbd0a92b609218343236aaf08c607ec61d3a18e69a35abc3efff12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bc7dg" Aug 13 01:13:41.582188 kubelet[2774]: E0813 01:13:41.581714 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bc7dg_calico-system(e1069218-cdb9-4130-adce-4bdd23361a59)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bc7dg_calico-system(e1069218-cdb9-4130-adce-4bdd23361a59)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5fe4f4f632dbd0a92b609218343236aaf08c607ec61d3a18e69a35abc3efff12\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bc7dg" podUID="e1069218-cdb9-4130-adce-4bdd23361a59" Aug 13 01:13:41.592205 containerd[1543]: time="2025-08-13T01:13:41.592156442Z" level=error msg="Failed to 
destroy network for sandbox \"8ee94b20651e970a6744a3b38af327e7d7b8748888b2db410184a03c293285c5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:41.595283 systemd[1]: run-netns-cni\x2d3d9b67d6\x2d0164\x2d96e3\x2daeef\x2ddef1e1e0d909.mount: Deactivated successfully. Aug 13 01:13:41.595718 containerd[1543]: time="2025-08-13T01:13:41.595685611Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qnwfr,Uid:713f2d42-12d1-4f50-bdc7-9964c95a9e2a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ee94b20651e970a6744a3b38af327e7d7b8748888b2db410184a03c293285c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:41.598079 kubelet[2774]: E0813 01:13:41.596097 2774 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ee94b20651e970a6744a3b38af327e7d7b8748888b2db410184a03c293285c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:41.598079 kubelet[2774]: E0813 01:13:41.596155 2774 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ee94b20651e970a6744a3b38af327e7d7b8748888b2db410184a03c293285c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qnwfr" Aug 13 01:13:41.598079 kubelet[2774]: E0813 01:13:41.596183 2774 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ee94b20651e970a6744a3b38af327e7d7b8748888b2db410184a03c293285c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qnwfr" Aug 13 01:13:41.598079 kubelet[2774]: E0813 01:13:41.596226 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-qnwfr_kube-system(713f2d42-12d1-4f50-bdc7-9964c95a9e2a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-qnwfr_kube-system(713f2d42-12d1-4f50-bdc7-9964c95a9e2a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8ee94b20651e970a6744a3b38af327e7d7b8748888b2db410184a03c293285c5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-qnwfr" podUID="713f2d42-12d1-4f50-bdc7-9964c95a9e2a" Aug 13 01:13:42.505777 kubelet[2774]: E0813 01:13:42.504484 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:13:42.506874 containerd[1543]: time="2025-08-13T01:13:42.506556271Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wp8jf,Uid:56ce87f4-747f-470b-8388-a8400bdda009,Namespace:kube-system,Attempt:0,}" Aug 13 01:13:42.567222 containerd[1543]: time="2025-08-13T01:13:42.567158240Z" level=error msg="Failed to destroy network for sandbox \"bf8dcdabae145bb6c836f2002cf304c27ee2f0b5bf313b3dd3f546ce4c41487c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:42.571268 systemd[1]: run-netns-cni\x2d361cbbec\x2d0dac\x2da7a2\x2d606f\x2d73ee410bce8f.mount: Deactivated successfully. Aug 13 01:13:42.572381 containerd[1543]: time="2025-08-13T01:13:42.572307423Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wp8jf,Uid:56ce87f4-747f-470b-8388-a8400bdda009,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf8dcdabae145bb6c836f2002cf304c27ee2f0b5bf313b3dd3f546ce4c41487c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:42.572825 kubelet[2774]: E0813 01:13:42.572772 2774 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf8dcdabae145bb6c836f2002cf304c27ee2f0b5bf313b3dd3f546ce4c41487c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:42.573193 kubelet[2774]: E0813 01:13:42.573146 2774 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf8dcdabae145bb6c836f2002cf304c27ee2f0b5bf313b3dd3f546ce4c41487c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wp8jf" Aug 13 01:13:42.573325 kubelet[2774]: E0813 01:13:42.573177 2774 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf8dcdabae145bb6c836f2002cf304c27ee2f0b5bf313b3dd3f546ce4c41487c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wp8jf" Aug 13 01:13:42.574088 kubelet[2774]: E0813 01:13:42.573307 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-wp8jf_kube-system(56ce87f4-747f-470b-8388-a8400bdda009)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-wp8jf_kube-system(56ce87f4-747f-470b-8388-a8400bdda009)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bf8dcdabae145bb6c836f2002cf304c27ee2f0b5bf313b3dd3f546ce4c41487c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-wp8jf" podUID="56ce87f4-747f-470b-8388-a8400bdda009" Aug 13 01:13:43.504648 containerd[1543]: 
time="2025-08-13T01:13:43.504563121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-6jmrw,Uid:4046fe50-03fc-4dc4-b202-24ffe10b29f3,Namespace:calico-system,Attempt:0,}" Aug 13 01:13:43.504648 containerd[1543]: time="2025-08-13T01:13:43.504653321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c5f875d88-fvdcx,Uid:e328d277-9a4f-4cd1-abd9-2dfeda4cfcc9,Namespace:calico-system,Attempt:0,}" Aug 13 01:13:43.505149 containerd[1543]: time="2025-08-13T01:13:43.505079032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c85ff7dcf-vrmqf,Uid:a96ea724-76b7-4e38-a8d3-10c9057e0e09,Namespace:calico-system,Attempt:0,}" Aug 13 01:13:43.601863 containerd[1543]: time="2025-08-13T01:13:43.601505282Z" level=error msg="Failed to destroy network for sandbox \"e516852c0e62ce2de5f11542197d1769dd35c14a01f8414cbf41f52ed8594c69\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:43.605502 systemd[1]: run-netns-cni\x2d2a55ec5e\x2de7bc\x2dbb61\x2d54fc\x2daf6c41d8e0d6.mount: Deactivated successfully. Aug 13 01:13:43.606698 containerd[1543]: time="2025-08-13T01:13:43.606553015Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c5f875d88-fvdcx,Uid:e328d277-9a4f-4cd1-abd9-2dfeda4cfcc9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e516852c0e62ce2de5f11542197d1769dd35c14a01f8414cbf41f52ed8594c69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:43.607083 kubelet[2774]: E0813 01:13:43.607014 2774 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e516852c0e62ce2de5f11542197d1769dd35c14a01f8414cbf41f52ed8594c69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:43.607371 kubelet[2774]: E0813 01:13:43.607115 2774 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e516852c0e62ce2de5f11542197d1769dd35c14a01f8414cbf41f52ed8594c69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c5f875d88-fvdcx" Aug 13 01:13:43.607371 kubelet[2774]: E0813 01:13:43.607149 2774 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e516852c0e62ce2de5f11542197d1769dd35c14a01f8414cbf41f52ed8594c69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c5f875d88-fvdcx" Aug 13 01:13:43.607371 kubelet[2774]: E0813 01:13:43.607193 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-c5f875d88-fvdcx_calico-system(e328d277-9a4f-4cd1-abd9-2dfeda4cfcc9)\" with CreatePodSandboxError: 
\"Failed to create sandbox for pod \\\"calico-kube-controllers-c5f875d88-fvdcx_calico-system(e328d277-9a4f-4cd1-abd9-2dfeda4cfcc9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e516852c0e62ce2de5f11542197d1769dd35c14a01f8414cbf41f52ed8594c69\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c5f875d88-fvdcx" podUID="e328d277-9a4f-4cd1-abd9-2dfeda4cfcc9" Aug 13 01:13:43.619758 containerd[1543]: time="2025-08-13T01:13:43.619589519Z" level=error msg="Failed to destroy network for sandbox \"690c62768c18f9ee4566469defdf664153d8512409e4f503f4c188dceb64f829\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:43.622859 systemd[1]: run-netns-cni\x2d5701eb9d\x2df606\x2d1879\x2d7ad3\x2d2b59d39cad89.mount: Deactivated successfully. Aug 13 01:13:43.624742 containerd[1543]: time="2025-08-13T01:13:43.624077681Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c85ff7dcf-vrmqf,Uid:a96ea724-76b7-4e38-a8d3-10c9057e0e09,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"690c62768c18f9ee4566469defdf664153d8512409e4f503f4c188dceb64f829\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:43.625389 kubelet[2774]: E0813 01:13:43.625064 2774 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"690c62768c18f9ee4566469defdf664153d8512409e4f503f4c188dceb64f829\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:43.625389 kubelet[2774]: E0813 01:13:43.625135 2774 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"690c62768c18f9ee4566469defdf664153d8512409e4f503f4c188dceb64f829\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6c85ff7dcf-vrmqf" Aug 13 01:13:43.625389 kubelet[2774]: E0813 01:13:43.625160 2774 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"690c62768c18f9ee4566469defdf664153d8512409e4f503f4c188dceb64f829\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6c85ff7dcf-vrmqf" Aug 13 01:13:43.625389 kubelet[2774]: E0813 01:13:43.625216 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6c85ff7dcf-vrmqf_calico-system(a96ea724-76b7-4e38-a8d3-10c9057e0e09)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6c85ff7dcf-vrmqf_calico-system(a96ea724-76b7-4e38-a8d3-10c9057e0e09)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"690c62768c18f9ee4566469defdf664153d8512409e4f503f4c188dceb64f829\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6c85ff7dcf-vrmqf" podUID="a96ea724-76b7-4e38-a8d3-10c9057e0e09" Aug 13 01:13:43.629370 containerd[1543]: time="2025-08-13T01:13:43.629181254Z" level=error msg="Failed to destroy network for sandbox \"89e947c4edff882001c322175eb62bee88074a4539a9cde24309c23cd02d812d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:43.631494 containerd[1543]: time="2025-08-13T01:13:43.631379940Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-6jmrw,Uid:4046fe50-03fc-4dc4-b202-24ffe10b29f3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"89e947c4edff882001c322175eb62bee88074a4539a9cde24309c23cd02d812d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:43.633561 systemd[1]: run-netns-cni\x2d5f769468\x2d872b\x2d49a7\x2dac7e\x2da7ea9aadb2f6.mount: Deactivated successfully. Aug 13 01:13:43.634134 kubelet[2774]: E0813 01:13:43.633891 2774 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89e947c4edff882001c322175eb62bee88074a4539a9cde24309c23cd02d812d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:43.634198 kubelet[2774]: E0813 01:13:43.634147 2774 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89e947c4edff882001c322175eb62bee88074a4539a9cde24309c23cd02d812d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-6jmrw" Aug 13 01:13:43.634198 kubelet[2774]: E0813 01:13:43.634171 2774 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89e947c4edff882001c322175eb62bee88074a4539a9cde24309c23cd02d812d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-6jmrw" Aug 13 01:13:43.634260 kubelet[2774]: E0813 01:13:43.634209 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-6jmrw_calico-system(4046fe50-03fc-4dc4-b202-24ffe10b29f3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-6jmrw_calico-system(4046fe50-03fc-4dc4-b202-24ffe10b29f3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"89e947c4edff882001c322175eb62bee88074a4539a9cde24309c23cd02d812d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-6jmrw" podUID="4046fe50-03fc-4dc4-b202-24ffe10b29f3" Aug 13 01:13:47.835412 kubelet[2774]: I0813 01:13:47.835290 2774 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:13:47.836275 kubelet[2774]: I0813 01:13:47.835697 2774 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:13:47.841306 kubelet[2774]: I0813 01:13:47.840565 2774 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:13:47.888050 kubelet[2774]: I0813 01:13:47.887980 2774 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:13:47.888282 kubelet[2774]: I0813 01:13:47.888153 2774 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-apiserver/calico-apiserver-bc89dd6cc-jsght","calico-system/whisker-6c85ff7dcf-vrmqf","calico-system/goldmane-768f4c5c69-6jmrw","kube-system/coredns-668d6bf9bc-wp8jf","kube-system/coredns-668d6bf9bc-qnwfr","calico-system/calico-kube-controllers-c5f875d88-fvdcx","calico-system/csi-node-driver-bc7dg","calico-system/calico-node-9bc9j","tigera-operator/tigera-operator-747864d56d-2rkjv","calico-system/calico-typha-bf7d6589c-n48dd","kube-system/kube-controller-manager-172-234-199-8","kube-system/kube-proxy-8p8zx","kube-system/kube-apiserver-172-234-199-8","kube-system/kube-scheduler-172-234-199-8"] Aug 13 01:13:47.902101 kubelet[2774]: I0813 01:13:47.901452 2774 eviction_manager.go:627] "Eviction manager: pod is evicted successfully" pod="calico-apiserver/calico-apiserver-bc89dd6cc-jsght" Aug 13 01:13:47.902101 kubelet[2774]: I0813 01:13:47.901491 2774 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["calico-apiserver/calico-apiserver-bc89dd6cc-jsght"] Aug 13 01:13:48.009174 kubelet[2774]: I0813 01:13:48.009108 2774 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/375926b9-4069-4b73-ae15-ed314a449202-calico-apiserver-certs\") pod \"375926b9-4069-4b73-ae15-ed314a449202\" (UID: \"375926b9-4069-4b73-ae15-ed314a449202\") " Aug 13 01:13:48.010795 kubelet[2774]: I0813 01:13:48.010641 2774 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdd42\" (UniqueName: \"kubernetes.io/projected/375926b9-4069-4b73-ae15-ed314a449202-kube-api-access-kdd42\") pod \"375926b9-4069-4b73-ae15-ed314a449202\" (UID: \"375926b9-4069-4b73-ae15-ed314a449202\") " Aug 13 01:13:48.032659 kubelet[2774]: I0813 01:13:48.032553 2774 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/375926b9-4069-4b73-ae15-ed314a449202-kube-api-access-kdd42" (OuterVolumeSpecName: "kube-api-access-kdd42") pod "375926b9-4069-4b73-ae15-ed314a449202" (UID: "375926b9-4069-4b73-ae15-ed314a449202"). InnerVolumeSpecName "kube-api-access-kdd42". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 01:13:48.033683 kubelet[2774]: I0813 01:13:48.033616 2774 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/375926b9-4069-4b73-ae15-ed314a449202-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "375926b9-4069-4b73-ae15-ed314a449202" (UID: "375926b9-4069-4b73-ae15-ed314a449202"). InnerVolumeSpecName "calico-apiserver-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 01:13:48.034762 systemd[1]: var-lib-kubelet-pods-375926b9\x2d4069\x2d4b73\x2dae15\x2ded314a449202-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkdd42.mount: Deactivated successfully. Aug 13 01:13:48.047493 systemd[1]: var-lib-kubelet-pods-375926b9\x2d4069\x2d4b73\x2dae15\x2ded314a449202-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Aug 13 01:13:48.116176 kubelet[2774]: I0813 01:13:48.115568 2774 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/375926b9-4069-4b73-ae15-ed314a449202-calico-apiserver-certs\") on node \"172-234-199-8\" DevicePath \"\"" Aug 13 01:13:48.116176 kubelet[2774]: I0813 01:13:48.115630 2774 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kdd42\" (UniqueName: \"kubernetes.io/projected/375926b9-4069-4b73-ae15-ed314a449202-kube-api-access-kdd42\") on node \"172-234-199-8\" DevicePath \"\"" Aug 13 01:13:48.522116 systemd[1]: Removed slice kubepods-besteffort-pod375926b9_4069_4b73_ae15_ed314a449202.slice - libcontainer container kubepods-besteffort-pod375926b9_4069_4b73_ae15_ed314a449202.slice. Aug 13 01:13:48.914558 kubelet[2774]: I0813 01:13:48.902553 2774 eviction_manager.go:458] "Eviction manager: pods successfully cleaned up" pods=["calico-apiserver/calico-apiserver-bc89dd6cc-jsght"] Aug 13 01:13:49.510286 containerd[1543]: time="2025-08-13T01:13:49.510216797Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 01:13:51.816639 containerd[1543]: time="2025-08-13T01:13:51.816478684Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1938257510: write /var/lib/containerd/tmpmounts/containerd-mount1938257510/usr/bin/calico-node: no space left on device" Aug 13 01:13:51.817570 containerd[1543]: time="2025-08-13T01:13:51.817390087Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Aug 13 01:13:51.817677 kubelet[2774]: E0813 01:13:51.817642 2774 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1938257510: write /var/lib/containerd/tmpmounts/containerd-mount1938257510/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.2" Aug 13 01:13:51.819401 kubelet[2774]: E0813 01:13:51.818244 2774 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1938257510: write /var/lib/containerd/tmpmounts/containerd-mount1938257510/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.2" Aug 13 01:13:51.819465 kubelet[2774]: E0813 01:13:51.818501 2774 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-node,Image:ghcr.io/flatcar/calico/node:v3.30.2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:WAIT_FOR_DATASTORE,Value:true,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:k8s,operator,bgp,ValueFrom:nil,},EnvVar{Name:CALICO_DISABLE_FILE_LOGGING,Value:false,ValueFrom:nil,},EnvVar{Name:FELIX_DEFAULTENDPOINTTOHOSTACTION,Value:ACCEPT,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHENABLED,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHPORT,Value:9099,ValueFrom:nil,},EnvVar{Name:NODENAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:FELIX_TYPHAK8SNAMESPACE,Value:calico-system,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAK8SSERVICENAME,Value:calico-typha,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACAFILE,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACERTFILE,Value:/node-certs/tls.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAKEYFILE,Value:/node-certs/tls.key,ValueFrom:nil,},EnvVar{Name:NO_DEFAULT_POOLS,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSGOLDMANESERVER,Value:goldmane.calico-system.svc:7443,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSFLUSHINTERVAL,Value:15,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACN,Value:typha-server,ValueFrom:nil,},EnvVar{Name:CALICO_MANAGE_CNI,Value:true,ValueFrom:nil,},EnvVar{Name:CALICO_NETWORKING_BACKEND,Value:bird,ValueFrom:nil,},EnvVar{Name:IP,Value:autodetect,ValueFrom:nil,},EnvVar{Name:IP_AUTODETECTION_METHOD,Value:first-found,ValueFrom:nil,},EnvVar{Name:IP6,Value:none,ValueFrom:nil,},EnvVar{Name:FELIX_IPV6SUPPORT,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-log-dir,ReadOnly:false,MountPath:/var/log/calico/cni,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-net-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:node-certs,ReadOnly:true,MountPath:/node-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:policysync,ReadOnly:false,MountPath:/var/run/nodeagent,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib-calico,ReadOnly:false,MountPath:/var/lib/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-calico,ReadOnly:false,MountPath:/var/run/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.l
ock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b2tsn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/liveness,Port:{0 9099 },Host:localhost,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/calico-node -bird-ready -felix-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/bin/calico-node -shutdown],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-node-9bc9j_calico-system(0909eaba-89ba-4b02-b2f3-a17e3b6e2afc): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1938257510: write /var/lib/containerd/tmpmounts/containerd-mount1938257510/usr/bin/calico-node: no space left on device" logger="UnhandledError" Aug 13 01:13:51.820207 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1938257510.mount: Deactivated successfully. 
Aug 13 01:13:51.820507 kubelet[2774]: E0813 01:13:51.819633 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1938257510: write /var/lib/containerd/tmpmounts/containerd-mount1938257510/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-9bc9j" podUID="0909eaba-89ba-4b02-b2f3-a17e3b6e2afc" Aug 13 01:13:54.505424 containerd[1543]: time="2025-08-13T01:13:54.504848589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bc7dg,Uid:e1069218-cdb9-4130-adce-4bdd23361a59,Namespace:calico-system,Attempt:0,}" Aug 13 01:13:54.587943 containerd[1543]: time="2025-08-13T01:13:54.587817589Z" level=error msg="Failed to destroy network for sandbox \"374a1217c2bf3f8bb06c004667298da7d0a8f23c548dea46099eaaf2804357ef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:54.591407 containerd[1543]: time="2025-08-13T01:13:54.591342448Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bc7dg,Uid:e1069218-cdb9-4130-adce-4bdd23361a59,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"374a1217c2bf3f8bb06c004667298da7d0a8f23c548dea46099eaaf2804357ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:54.592387 kubelet[2774]: E0813 01:13:54.592302 2774 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"374a1217c2bf3f8bb06c004667298da7d0a8f23c548dea46099eaaf2804357ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:54.592988 systemd[1]: run-netns-cni\x2d51e6c5ad\x2dc5da\x2d7981\x2df945\x2de63d3af894d3.mount: Deactivated successfully. 
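Every sandbox failure in this stretch repeats the same CNI hint: the calico plugin stats /var/lib/calico/nodename and finds it missing, because calico-node (whose image pull keeps failing with ENOSPC) has never started and written it. A small sketch of that check, run directly on the node, is below; the path is taken from the error message itself, and the script only inspects it, it does not repair anything.

```python
#!/usr/bin/env python3
"""Minimal sketch: check the condition named in the calico CNI errors above.
The plugin stats /var/lib/calico/nodename, which calico-node writes once it
is running; this only inspects the host path."""
from pathlib import Path

# Path quoted verbatim in the "plugin type=calico failed" error messages.
NODENAME = Path("/var/lib/calico/nodename")

if NODENAME.is_file():
    print(f"{NODENAME} present, node name: {NODENAME.read_text().strip()!r}")
else:
    print(
        f"{NODENAME} missing: calico-node has not started on this node "
        "(here it cannot start because its image pull fails with ENOSPC)"
    )
```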
Aug 13 01:13:54.596903 kubelet[2774]: E0813 01:13:54.593596 2774 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"374a1217c2bf3f8bb06c004667298da7d0a8f23c548dea46099eaaf2804357ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bc7dg" Aug 13 01:13:54.596903 kubelet[2774]: E0813 01:13:54.593672 2774 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"374a1217c2bf3f8bb06c004667298da7d0a8f23c548dea46099eaaf2804357ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bc7dg" Aug 13 01:13:54.596903 kubelet[2774]: E0813 01:13:54.593776 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bc7dg_calico-system(e1069218-cdb9-4130-adce-4bdd23361a59)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bc7dg_calico-system(e1069218-cdb9-4130-adce-4bdd23361a59)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"374a1217c2bf3f8bb06c004667298da7d0a8f23c548dea46099eaaf2804357ef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bc7dg" podUID="e1069218-cdb9-4130-adce-4bdd23361a59" Aug 13 01:13:55.503505 kubelet[2774]: E0813 01:13:55.503461 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:13:55.504398 containerd[1543]: time="2025-08-13T01:13:55.504340917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wp8jf,Uid:56ce87f4-747f-470b-8388-a8400bdda009,Namespace:kube-system,Attempt:0,}" Aug 13 01:13:55.574803 containerd[1543]: time="2025-08-13T01:13:55.574736727Z" level=error msg="Failed to destroy network for sandbox \"0a7cbe35fa99698c096a21a2e0b7ca9d9e3f2fa74ea2e1093bb6b7c9414ad28d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:55.577076 systemd[1]: run-netns-cni\x2d1f5e704a\x2db69c\x2df725\x2df7f3\x2de30632703668.mount: Deactivated successfully. 
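Because the same "Failed to create sandbox for pod" / "Error syncing pod" cycle repeats for several workloads (coredns, csi-node-driver, whisker, goldmane, calico-kube-controllers), it can help to summarise the journal by pod rather than read it entry by entry. Below is a minimal sketch under the assumption that the kubelet journal has been exported to a plain-text file; the filename kubelet.log is hypothetical, and the regex matches the pod="namespace/name" fields visible in these lines. Note it counts matching journal lines per pod, not distinct failure events.

```python
#!/usr/bin/env python3
"""Minimal sketch: count journal lines mentioning a sandbox-creation failure,
grouped by pod. Assumes the journal has been exported to a plain-text file
(the filename below is hypothetical)."""
import re
from collections import Counter

# Matches lines like:
#   ... "Failed to create sandbox for pod" err="..." pod="kube-system/coredns-..."
PATTERN = re.compile(r'Failed to create sandbox for pod.*?pod="([^"]+)"')


def failing_pods(log_path: str) -> Counter:
    counts: Counter = Counter()
    with open(log_path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            for pod in PATTERN.findall(line):
                counts[pod] += 1
    return counts


if __name__ == "__main__":
    for pod, n in failing_pods("kubelet.log").most_common():
        print(f"{n:4d}  {pod}")
```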
Aug 13 01:13:55.579907 containerd[1543]: time="2025-08-13T01:13:55.579682799Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wp8jf,Uid:56ce87f4-747f-470b-8388-a8400bdda009,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a7cbe35fa99698c096a21a2e0b7ca9d9e3f2fa74ea2e1093bb6b7c9414ad28d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:55.580172 kubelet[2774]: E0813 01:13:55.580133 2774 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a7cbe35fa99698c096a21a2e0b7ca9d9e3f2fa74ea2e1093bb6b7c9414ad28d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:55.580263 kubelet[2774]: E0813 01:13:55.580190 2774 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a7cbe35fa99698c096a21a2e0b7ca9d9e3f2fa74ea2e1093bb6b7c9414ad28d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wp8jf" Aug 13 01:13:55.580263 kubelet[2774]: E0813 01:13:55.580213 2774 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a7cbe35fa99698c096a21a2e0b7ca9d9e3f2fa74ea2e1093bb6b7c9414ad28d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wp8jf" Aug 13 01:13:55.580415 kubelet[2774]: E0813 01:13:55.580262 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-wp8jf_kube-system(56ce87f4-747f-470b-8388-a8400bdda009)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-wp8jf_kube-system(56ce87f4-747f-470b-8388-a8400bdda009)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0a7cbe35fa99698c096a21a2e0b7ca9d9e3f2fa74ea2e1093bb6b7c9414ad28d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-wp8jf" podUID="56ce87f4-747f-470b-8388-a8400bdda009" Aug 13 01:13:56.505293 kubelet[2774]: E0813 01:13:56.504897 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:13:56.506449 containerd[1543]: time="2025-08-13T01:13:56.506393004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qnwfr,Uid:713f2d42-12d1-4f50-bdc7-9964c95a9e2a,Namespace:kube-system,Attempt:0,}" Aug 13 01:13:56.575454 containerd[1543]: time="2025-08-13T01:13:56.575366660Z" level=error msg="Failed to destroy network for sandbox \"afbf72fd78314e7a14bfcb2c3cc0d76296f32952d67adf2a5fbeefbfcb3fee8f\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:56.579780 containerd[1543]: time="2025-08-13T01:13:56.579739949Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qnwfr,Uid:713f2d42-12d1-4f50-bdc7-9964c95a9e2a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"afbf72fd78314e7a14bfcb2c3cc0d76296f32952d67adf2a5fbeefbfcb3fee8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:56.579953 systemd[1]: run-netns-cni\x2df1afed43\x2d539c\x2de2d1\x2d6f74\x2d15c7f77fd88f.mount: Deactivated successfully. Aug 13 01:13:56.581621 kubelet[2774]: E0813 01:13:56.580486 2774 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afbf72fd78314e7a14bfcb2c3cc0d76296f32952d67adf2a5fbeefbfcb3fee8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:56.581621 kubelet[2774]: E0813 01:13:56.580571 2774 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afbf72fd78314e7a14bfcb2c3cc0d76296f32952d67adf2a5fbeefbfcb3fee8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qnwfr" Aug 13 01:13:56.581621 kubelet[2774]: E0813 01:13:56.580592 2774 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afbf72fd78314e7a14bfcb2c3cc0d76296f32952d67adf2a5fbeefbfcb3fee8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qnwfr" Aug 13 01:13:56.581621 kubelet[2774]: E0813 01:13:56.580644 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-qnwfr_kube-system(713f2d42-12d1-4f50-bdc7-9964c95a9e2a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-qnwfr_kube-system(713f2d42-12d1-4f50-bdc7-9964c95a9e2a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"afbf72fd78314e7a14bfcb2c3cc0d76296f32952d67adf2a5fbeefbfcb3fee8f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-qnwfr" podUID="713f2d42-12d1-4f50-bdc7-9964c95a9e2a" Aug 13 01:13:57.504626 containerd[1543]: time="2025-08-13T01:13:57.504567176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c85ff7dcf-vrmqf,Uid:a96ea724-76b7-4e38-a8d3-10c9057e0e09,Namespace:calico-system,Attempt:0,}" Aug 13 01:13:57.568401 containerd[1543]: time="2025-08-13T01:13:57.568317658Z" level=error msg="Failed to destroy network for sandbox \"04e6e87614632ec562363ade6e245e2e0235904382dd9a8650579581c1c7a96f\"" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:57.570998 systemd[1]: run-netns-cni\x2db12f56bb\x2d280e\x2dab21\x2d4fc5\x2d634e454f24b6.mount: Deactivated successfully. Aug 13 01:13:57.571982 containerd[1543]: time="2025-08-13T01:13:57.571638106Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c85ff7dcf-vrmqf,Uid:a96ea724-76b7-4e38-a8d3-10c9057e0e09,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"04e6e87614632ec562363ade6e245e2e0235904382dd9a8650579581c1c7a96f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:57.573147 kubelet[2774]: E0813 01:13:57.572747 2774 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04e6e87614632ec562363ade6e245e2e0235904382dd9a8650579581c1c7a96f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:57.573561 kubelet[2774]: E0813 01:13:57.573521 2774 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04e6e87614632ec562363ade6e245e2e0235904382dd9a8650579581c1c7a96f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6c85ff7dcf-vrmqf" Aug 13 01:13:57.573616 kubelet[2774]: E0813 01:13:57.573596 2774 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04e6e87614632ec562363ade6e245e2e0235904382dd9a8650579581c1c7a96f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6c85ff7dcf-vrmqf" Aug 13 01:13:57.573742 kubelet[2774]: E0813 01:13:57.573678 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6c85ff7dcf-vrmqf_calico-system(a96ea724-76b7-4e38-a8d3-10c9057e0e09)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6c85ff7dcf-vrmqf_calico-system(a96ea724-76b7-4e38-a8d3-10c9057e0e09)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"04e6e87614632ec562363ade6e245e2e0235904382dd9a8650579581c1c7a96f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6c85ff7dcf-vrmqf" podUID="a96ea724-76b7-4e38-a8d3-10c9057e0e09" Aug 13 01:13:58.505329 containerd[1543]: time="2025-08-13T01:13:58.504978726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c5f875d88-fvdcx,Uid:e328d277-9a4f-4cd1-abd9-2dfeda4cfcc9,Namespace:calico-system,Attempt:0,}" Aug 13 01:13:58.505820 containerd[1543]: time="2025-08-13T01:13:58.505059096Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-768f4c5c69-6jmrw,Uid:4046fe50-03fc-4dc4-b202-24ffe10b29f3,Namespace:calico-system,Attempt:0,}" Aug 13 01:13:58.578678 containerd[1543]: time="2025-08-13T01:13:58.577884229Z" level=error msg="Failed to destroy network for sandbox \"8874b351417633b273292d551e296719d7f9ff09c45b2684533e27e8d7b0b110\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:58.581580 systemd[1]: run-netns-cni\x2daf516bee\x2d62eb\x2de093\x2dc368\x2d38c72abb2937.mount: Deactivated successfully. Aug 13 01:13:58.583817 containerd[1543]: time="2025-08-13T01:13:58.582801240Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-6jmrw,Uid:4046fe50-03fc-4dc4-b202-24ffe10b29f3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8874b351417633b273292d551e296719d7f9ff09c45b2684533e27e8d7b0b110\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:58.585959 kubelet[2774]: E0813 01:13:58.585333 2774 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8874b351417633b273292d551e296719d7f9ff09c45b2684533e27e8d7b0b110\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:58.587779 kubelet[2774]: E0813 01:13:58.585840 2774 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8874b351417633b273292d551e296719d7f9ff09c45b2684533e27e8d7b0b110\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-6jmrw" Aug 13 01:13:58.587779 kubelet[2774]: E0813 01:13:58.586394 2774 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8874b351417633b273292d551e296719d7f9ff09c45b2684533e27e8d7b0b110\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-6jmrw" Aug 13 01:13:58.587779 kubelet[2774]: E0813 01:13:58.586460 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-6jmrw_calico-system(4046fe50-03fc-4dc4-b202-24ffe10b29f3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-6jmrw_calico-system(4046fe50-03fc-4dc4-b202-24ffe10b29f3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8874b351417633b273292d551e296719d7f9ff09c45b2684533e27e8d7b0b110\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-6jmrw" podUID="4046fe50-03fc-4dc4-b202-24ffe10b29f3" Aug 13 01:13:58.594449 containerd[1543]: time="2025-08-13T01:13:58.594385335Z" 
level=error msg="Failed to destroy network for sandbox \"75981085adfe6aa69de0ad633a2624bbb861b7e1ab897495b9162d0b7caa1501\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:58.598913 systemd[1]: run-netns-cni\x2d90c1cc1d\x2d9023\x2db7e5\x2d33dc\x2d3c2cfbecf164.mount: Deactivated successfully. Aug 13 01:13:58.599564 containerd[1543]: time="2025-08-13T01:13:58.599521826Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c5f875d88-fvdcx,Uid:e328d277-9a4f-4cd1-abd9-2dfeda4cfcc9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"75981085adfe6aa69de0ad633a2624bbb861b7e1ab897495b9162d0b7caa1501\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:58.600503 kubelet[2774]: E0813 01:13:58.600008 2774 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75981085adfe6aa69de0ad633a2624bbb861b7e1ab897495b9162d0b7caa1501\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:58.600503 kubelet[2774]: E0813 01:13:58.600069 2774 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75981085adfe6aa69de0ad633a2624bbb861b7e1ab897495b9162d0b7caa1501\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c5f875d88-fvdcx" Aug 13 01:13:58.600503 kubelet[2774]: E0813 01:13:58.600096 2774 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75981085adfe6aa69de0ad633a2624bbb861b7e1ab897495b9162d0b7caa1501\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c5f875d88-fvdcx" Aug 13 01:13:58.600503 kubelet[2774]: E0813 01:13:58.600162 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-c5f875d88-fvdcx_calico-system(e328d277-9a4f-4cd1-abd9-2dfeda4cfcc9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-c5f875d88-fvdcx_calico-system(e328d277-9a4f-4cd1-abd9-2dfeda4cfcc9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"75981085adfe6aa69de0ad633a2624bbb861b7e1ab897495b9162d0b7caa1501\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c5f875d88-fvdcx" podUID="e328d277-9a4f-4cd1-abd9-2dfeda4cfcc9" Aug 13 01:13:58.949674 kubelet[2774]: I0813 01:13:58.949527 2774 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:13:58.949674 kubelet[2774]: I0813 
01:13:58.949573 2774 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:13:58.951641 kubelet[2774]: I0813 01:13:58.951582 2774 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:13:58.965839 kubelet[2774]: I0813 01:13:58.965802 2774 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:13:58.966267 kubelet[2774]: I0813 01:13:58.965897 2774 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/whisker-6c85ff7dcf-vrmqf","calico-system/goldmane-768f4c5c69-6jmrw","calico-system/calico-kube-controllers-c5f875d88-fvdcx","kube-system/coredns-668d6bf9bc-qnwfr","kube-system/coredns-668d6bf9bc-wp8jf","calico-system/csi-node-driver-bc7dg","calico-system/calico-node-9bc9j","tigera-operator/tigera-operator-747864d56d-2rkjv","calico-system/calico-typha-bf7d6589c-n48dd","kube-system/kube-controller-manager-172-234-199-8","kube-system/kube-proxy-8p8zx","kube-system/kube-apiserver-172-234-199-8","kube-system/kube-scheduler-172-234-199-8"] Aug 13 01:13:58.973572 kubelet[2774]: I0813 01:13:58.973526 2774 eviction_manager.go:627] "Eviction manager: pod is evicted successfully" pod="calico-system/whisker-6c85ff7dcf-vrmqf" Aug 13 01:13:58.973572 kubelet[2774]: I0813 01:13:58.973551 2774 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["calico-system/whisker-6c85ff7dcf-vrmqf"] Aug 13 01:13:59.106877 kubelet[2774]: I0813 01:13:59.106811 2774 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zcmnw\" (UniqueName: \"kubernetes.io/projected/a96ea724-76b7-4e38-a8d3-10c9057e0e09-kube-api-access-zcmnw\") pod \"a96ea724-76b7-4e38-a8d3-10c9057e0e09\" (UID: \"a96ea724-76b7-4e38-a8d3-10c9057e0e09\") " Aug 13 01:13:59.106877 kubelet[2774]: I0813 01:13:59.106883 2774 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a96ea724-76b7-4e38-a8d3-10c9057e0e09-whisker-backend-key-pair\") pod \"a96ea724-76b7-4e38-a8d3-10c9057e0e09\" (UID: \"a96ea724-76b7-4e38-a8d3-10c9057e0e09\") " Aug 13 01:13:59.107312 kubelet[2774]: I0813 01:13:59.106929 2774 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a96ea724-76b7-4e38-a8d3-10c9057e0e09-whisker-ca-bundle\") pod \"a96ea724-76b7-4e38-a8d3-10c9057e0e09\" (UID: \"a96ea724-76b7-4e38-a8d3-10c9057e0e09\") " Aug 13 01:13:59.108741 kubelet[2774]: I0813 01:13:59.108659 2774 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a96ea724-76b7-4e38-a8d3-10c9057e0e09-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "a96ea724-76b7-4e38-a8d3-10c9057e0e09" (UID: "a96ea724-76b7-4e38-a8d3-10c9057e0e09"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 01:13:59.111660 kubelet[2774]: I0813 01:13:59.111639 2774 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a96ea724-76b7-4e38-a8d3-10c9057e0e09-kube-api-access-zcmnw" (OuterVolumeSpecName: "kube-api-access-zcmnw") pod "a96ea724-76b7-4e38-a8d3-10c9057e0e09" (UID: "a96ea724-76b7-4e38-a8d3-10c9057e0e09"). InnerVolumeSpecName "kube-api-access-zcmnw". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 01:13:59.112524 kubelet[2774]: I0813 01:13:59.112496 2774 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a96ea724-76b7-4e38-a8d3-10c9057e0e09-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "a96ea724-76b7-4e38-a8d3-10c9057e0e09" (UID: "a96ea724-76b7-4e38-a8d3-10c9057e0e09"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 01:13:59.114604 systemd[1]: var-lib-kubelet-pods-a96ea724\x2d76b7\x2d4e38\x2da8d3\x2d10c9057e0e09-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzcmnw.mount: Deactivated successfully. Aug 13 01:13:59.114738 systemd[1]: var-lib-kubelet-pods-a96ea724\x2d76b7\x2d4e38\x2da8d3\x2d10c9057e0e09-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Aug 13 01:13:59.207858 kubelet[2774]: I0813 01:13:59.207688 2774 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a96ea724-76b7-4e38-a8d3-10c9057e0e09-whisker-ca-bundle\") on node \"172-234-199-8\" DevicePath \"\"" Aug 13 01:13:59.207858 kubelet[2774]: I0813 01:13:59.207734 2774 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zcmnw\" (UniqueName: \"kubernetes.io/projected/a96ea724-76b7-4e38-a8d3-10c9057e0e09-kube-api-access-zcmnw\") on node \"172-234-199-8\" DevicePath \"\"" Aug 13 01:13:59.207858 kubelet[2774]: I0813 01:13:59.207755 2774 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a96ea724-76b7-4e38-a8d3-10c9057e0e09-whisker-backend-key-pair\") on node \"172-234-199-8\" DevicePath \"\"" Aug 13 01:13:59.802811 systemd[1]: Removed slice kubepods-besteffort-poda96ea724_76b7_4e38_a8d3_10c9057e0e09.slice - libcontainer container kubepods-besteffort-poda96ea724_76b7_4e38_a8d3_10c9057e0e09.slice. 
Aug 13 01:13:59.974682 kubelet[2774]: I0813 01:13:59.974602 2774 eviction_manager.go:458] "Eviction manager: pods successfully cleaned up" pods=["calico-system/whisker-6c85ff7dcf-vrmqf"] Aug 13 01:14:03.507493 kubelet[2774]: E0813 01:14:03.507404 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1938257510: write /var/lib/containerd/tmpmounts/containerd-mount1938257510/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-9bc9j" podUID="0909eaba-89ba-4b02-b2f3-a17e3b6e2afc" Aug 13 01:14:07.504915 containerd[1543]: time="2025-08-13T01:14:07.504860858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bc7dg,Uid:e1069218-cdb9-4130-adce-4bdd23361a59,Namespace:calico-system,Attempt:0,}" Aug 13 01:14:07.559078 containerd[1543]: time="2025-08-13T01:14:07.559016543Z" level=error msg="Failed to destroy network for sandbox \"319cfc020ee70a787dab71f46d8c542940426cd811c5972ba57501a9c0769602\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:07.563458 systemd[1]: run-netns-cni\x2d429e114f\x2dccf1\x2d14f3\x2d180d\x2d45d7c620e2f5.mount: Deactivated successfully. Aug 13 01:14:07.564061 containerd[1543]: time="2025-08-13T01:14:07.563560431Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bc7dg,Uid:e1069218-cdb9-4130-adce-4bdd23361a59,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"319cfc020ee70a787dab71f46d8c542940426cd811c5972ba57501a9c0769602\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:07.565556 kubelet[2774]: E0813 01:14:07.564232 2774 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"319cfc020ee70a787dab71f46d8c542940426cd811c5972ba57501a9c0769602\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:07.565556 kubelet[2774]: E0813 01:14:07.564306 2774 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"319cfc020ee70a787dab71f46d8c542940426cd811c5972ba57501a9c0769602\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bc7dg" Aug 13 01:14:07.565556 kubelet[2774]: E0813 01:14:07.564332 2774 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"319cfc020ee70a787dab71f46d8c542940426cd811c5972ba57501a9c0769602\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bc7dg" Aug 13 01:14:07.565556 kubelet[2774]: E0813 01:14:07.564441 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bc7dg_calico-system(e1069218-cdb9-4130-adce-4bdd23361a59)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bc7dg_calico-system(e1069218-cdb9-4130-adce-4bdd23361a59)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"319cfc020ee70a787dab71f46d8c542940426cd811c5972ba57501a9c0769602\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bc7dg" podUID="e1069218-cdb9-4130-adce-4bdd23361a59" Aug 13 01:14:08.504817 kubelet[2774]: E0813 01:14:08.504338 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:14:09.504198 kubelet[2774]: E0813 01:14:09.504144 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:14:09.505543 containerd[1543]: time="2025-08-13T01:14:09.505495711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qnwfr,Uid:713f2d42-12d1-4f50-bdc7-9964c95a9e2a,Namespace:kube-system,Attempt:0,}" Aug 13 01:14:09.607919 containerd[1543]: time="2025-08-13T01:14:09.606690777Z" level=error msg="Failed to destroy network for sandbox \"1b23f23e0ccb63f12b3a6a2162e61149626325f5a70a3c70c5056b7bbb9d4b2c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:09.609886 containerd[1543]: time="2025-08-13T01:14:09.609833024Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qnwfr,Uid:713f2d42-12d1-4f50-bdc7-9964c95a9e2a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b23f23e0ccb63f12b3a6a2162e61149626325f5a70a3c70c5056b7bbb9d4b2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:09.610603 kubelet[2774]: E0813 01:14:09.610552 2774 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b23f23e0ccb63f12b3a6a2162e61149626325f5a70a3c70c5056b7bbb9d4b2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:09.610697 kubelet[2774]: E0813 01:14:09.610638 2774 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b23f23e0ccb63f12b3a6a2162e61149626325f5a70a3c70c5056b7bbb9d4b2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-668d6bf9bc-qnwfr" Aug 13 01:14:09.610697 kubelet[2774]: E0813 01:14:09.610681 2774 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b23f23e0ccb63f12b3a6a2162e61149626325f5a70a3c70c5056b7bbb9d4b2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qnwfr" Aug 13 01:14:09.610780 kubelet[2774]: E0813 01:14:09.610741 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-qnwfr_kube-system(713f2d42-12d1-4f50-bdc7-9964c95a9e2a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-qnwfr_kube-system(713f2d42-12d1-4f50-bdc7-9964c95a9e2a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1b23f23e0ccb63f12b3a6a2162e61149626325f5a70a3c70c5056b7bbb9d4b2c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-qnwfr" podUID="713f2d42-12d1-4f50-bdc7-9964c95a9e2a" Aug 13 01:14:09.613997 systemd[1]: run-netns-cni\x2d63cf0e74\x2db650\x2d961d\x2d900f\x2d0f0587fabcde.mount: Deactivated successfully. Aug 13 01:14:10.008956 kubelet[2774]: I0813 01:14:10.008901 2774 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:14:10.008956 kubelet[2774]: I0813 01:14:10.008950 2774 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:14:10.013263 kubelet[2774]: I0813 01:14:10.013200 2774 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:14:10.031115 kubelet[2774]: I0813 01:14:10.031067 2774 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:14:10.031948 kubelet[2774]: I0813 01:14:10.031239 2774 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/goldmane-768f4c5c69-6jmrw","kube-system/coredns-668d6bf9bc-wp8jf","kube-system/coredns-668d6bf9bc-qnwfr","calico-system/calico-kube-controllers-c5f875d88-fvdcx","calico-system/calico-node-9bc9j","calico-system/csi-node-driver-bc7dg","tigera-operator/tigera-operator-747864d56d-2rkjv","calico-system/calico-typha-bf7d6589c-n48dd","kube-system/kube-controller-manager-172-234-199-8","kube-system/kube-proxy-8p8zx","kube-system/kube-apiserver-172-234-199-8","kube-system/kube-scheduler-172-234-199-8"] Aug 13 01:14:10.038225 kubelet[2774]: I0813 01:14:10.038205 2774 eviction_manager.go:627] "Eviction manager: pod is evicted successfully" pod="calico-system/goldmane-768f4c5c69-6jmrw" Aug 13 01:14:10.038225 kubelet[2774]: I0813 01:14:10.038227 2774 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["calico-system/goldmane-768f4c5c69-6jmrw"] Aug 13 01:14:10.185237 kubelet[2774]: I0813 01:14:10.185167 2774 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/4046fe50-03fc-4dc4-b202-24ffe10b29f3-goldmane-key-pair\") pod \"4046fe50-03fc-4dc4-b202-24ffe10b29f3\" (UID: \"4046fe50-03fc-4dc4-b202-24ffe10b29f3\") " Aug 13 01:14:10.185237 kubelet[2774]: I0813 01:14:10.185234 2774 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4046fe50-03fc-4dc4-b202-24ffe10b29f3-config\") pod \"4046fe50-03fc-4dc4-b202-24ffe10b29f3\" (UID: \"4046fe50-03fc-4dc4-b202-24ffe10b29f3\") " Aug 13 01:14:10.185719 kubelet[2774]: I0813 01:14:10.185485 2774 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4046fe50-03fc-4dc4-b202-24ffe10b29f3-goldmane-ca-bundle\") pod \"4046fe50-03fc-4dc4-b202-24ffe10b29f3\" (UID: \"4046fe50-03fc-4dc4-b202-24ffe10b29f3\") " Aug 13 01:14:10.185719 kubelet[2774]: I0813 01:14:10.185513 2774 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xj2xw\" (UniqueName: \"kubernetes.io/projected/4046fe50-03fc-4dc4-b202-24ffe10b29f3-kube-api-access-xj2xw\") pod \"4046fe50-03fc-4dc4-b202-24ffe10b29f3\" (UID: \"4046fe50-03fc-4dc4-b202-24ffe10b29f3\") " Aug 13 01:14:10.186600 kubelet[2774]: I0813 01:14:10.186130 2774 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4046fe50-03fc-4dc4-b202-24ffe10b29f3-config" (OuterVolumeSpecName: "config") pod "4046fe50-03fc-4dc4-b202-24ffe10b29f3" (UID: "4046fe50-03fc-4dc4-b202-24ffe10b29f3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 01:14:10.187801 kubelet[2774]: I0813 01:14:10.187781 2774 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4046fe50-03fc-4dc4-b202-24ffe10b29f3-goldmane-ca-bundle" (OuterVolumeSpecName: "goldmane-ca-bundle") pod "4046fe50-03fc-4dc4-b202-24ffe10b29f3" (UID: "4046fe50-03fc-4dc4-b202-24ffe10b29f3"). InnerVolumeSpecName "goldmane-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 01:14:10.191679 systemd[1]: var-lib-kubelet-pods-4046fe50\x2d03fc\x2d4dc4\x2db202\x2d24ffe10b29f3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxj2xw.mount: Deactivated successfully. Aug 13 01:14:10.194281 kubelet[2774]: I0813 01:14:10.194243 2774 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4046fe50-03fc-4dc4-b202-24ffe10b29f3-kube-api-access-xj2xw" (OuterVolumeSpecName: "kube-api-access-xj2xw") pod "4046fe50-03fc-4dc4-b202-24ffe10b29f3" (UID: "4046fe50-03fc-4dc4-b202-24ffe10b29f3"). InnerVolumeSpecName "kube-api-access-xj2xw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 01:14:10.196104 kubelet[2774]: I0813 01:14:10.196001 2774 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4046fe50-03fc-4dc4-b202-24ffe10b29f3-goldmane-key-pair" (OuterVolumeSpecName: "goldmane-key-pair") pod "4046fe50-03fc-4dc4-b202-24ffe10b29f3" (UID: "4046fe50-03fc-4dc4-b202-24ffe10b29f3"). InnerVolumeSpecName "goldmane-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 01:14:10.196241 systemd[1]: var-lib-kubelet-pods-4046fe50\x2d03fc\x2d4dc4\x2db202\x2d24ffe10b29f3-volumes-kubernetes.io\x7esecret-goldmane\x2dkey\x2dpair.mount: Deactivated successfully. 
Aug 13 01:14:10.286890 kubelet[2774]: I0813 01:14:10.286695 2774 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xj2xw\" (UniqueName: \"kubernetes.io/projected/4046fe50-03fc-4dc4-b202-24ffe10b29f3-kube-api-access-xj2xw\") on node \"172-234-199-8\" DevicePath \"\"" Aug 13 01:14:10.286890 kubelet[2774]: I0813 01:14:10.286740 2774 reconciler_common.go:299] "Volume detached for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/4046fe50-03fc-4dc4-b202-24ffe10b29f3-goldmane-key-pair\") on node \"172-234-199-8\" DevicePath \"\"" Aug 13 01:14:10.286890 kubelet[2774]: I0813 01:14:10.286753 2774 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4046fe50-03fc-4dc4-b202-24ffe10b29f3-config\") on node \"172-234-199-8\" DevicePath \"\"" Aug 13 01:14:10.286890 kubelet[2774]: I0813 01:14:10.286767 2774 reconciler_common.go:299] "Volume detached for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4046fe50-03fc-4dc4-b202-24ffe10b29f3-goldmane-ca-bundle\") on node \"172-234-199-8\" DevicePath \"\"" Aug 13 01:14:10.505392 kubelet[2774]: E0813 01:14:10.504534 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:14:10.507336 containerd[1543]: time="2025-08-13T01:14:10.507188953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wp8jf,Uid:56ce87f4-747f-470b-8388-a8400bdda009,Namespace:kube-system,Attempt:0,}" Aug 13 01:14:10.520897 systemd[1]: Removed slice kubepods-besteffort-pod4046fe50_03fc_4dc4_b202_24ffe10b29f3.slice - libcontainer container kubepods-besteffort-pod4046fe50_03fc_4dc4_b202_24ffe10b29f3.slice. Aug 13 01:14:10.588722 containerd[1543]: time="2025-08-13T01:14:10.583932778Z" level=error msg="Failed to destroy network for sandbox \"8d408b7b228314555b96ade8504c5e29369bb25f345eea9d2d1a8f06ae3fd3c6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:10.590183 containerd[1543]: time="2025-08-13T01:14:10.590147229Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wp8jf,Uid:56ce87f4-747f-470b-8388-a8400bdda009,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d408b7b228314555b96ade8504c5e29369bb25f345eea9d2d1a8f06ae3fd3c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:10.590831 systemd[1]: run-netns-cni\x2d8b99f5b3\x2d7acd\x2d4a65\x2d9d8c\x2de7b4f8c38df9.mount: Deactivated successfully. 
Aug 13 01:14:10.591003 kubelet[2774]: E0813 01:14:10.590973 2774 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d408b7b228314555b96ade8504c5e29369bb25f345eea9d2d1a8f06ae3fd3c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:10.591145 kubelet[2774]: E0813 01:14:10.591124 2774 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d408b7b228314555b96ade8504c5e29369bb25f345eea9d2d1a8f06ae3fd3c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wp8jf" Aug 13 01:14:10.591238 kubelet[2774]: E0813 01:14:10.591219 2774 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d408b7b228314555b96ade8504c5e29369bb25f345eea9d2d1a8f06ae3fd3c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wp8jf" Aug 13 01:14:10.591677 kubelet[2774]: E0813 01:14:10.591631 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-wp8jf_kube-system(56ce87f4-747f-470b-8388-a8400bdda009)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-wp8jf_kube-system(56ce87f4-747f-470b-8388-a8400bdda009)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8d408b7b228314555b96ade8504c5e29369bb25f345eea9d2d1a8f06ae3fd3c6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-wp8jf" podUID="56ce87f4-747f-470b-8388-a8400bdda009" Aug 13 01:14:11.038856 kubelet[2774]: I0813 01:14:11.038797 2774 eviction_manager.go:458] "Eviction manager: pods successfully cleaned up" pods=["calico-system/goldmane-768f4c5c69-6jmrw"] Aug 13 01:14:12.504988 containerd[1543]: time="2025-08-13T01:14:12.504916676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c5f875d88-fvdcx,Uid:e328d277-9a4f-4cd1-abd9-2dfeda4cfcc9,Namespace:calico-system,Attempt:0,}" Aug 13 01:14:12.566141 containerd[1543]: time="2025-08-13T01:14:12.566069084Z" level=error msg="Failed to destroy network for sandbox \"e4b5b1da1d7f53708550617c957ade47d20b9df19e43c5cf2fdb9ddda93f46ff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:12.569758 systemd[1]: run-netns-cni\x2d8afbae68\x2da212\x2d3df0\x2d45f3\x2d3b5ae51b4c4d.mount: Deactivated successfully. 
Aug 13 01:14:12.570675 containerd[1543]: time="2025-08-13T01:14:12.570300042Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c5f875d88-fvdcx,Uid:e328d277-9a4f-4cd1-abd9-2dfeda4cfcc9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4b5b1da1d7f53708550617c957ade47d20b9df19e43c5cf2fdb9ddda93f46ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:12.571212 kubelet[2774]: E0813 01:14:12.571134 2774 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4b5b1da1d7f53708550617c957ade47d20b9df19e43c5cf2fdb9ddda93f46ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:12.571788 kubelet[2774]: E0813 01:14:12.571218 2774 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4b5b1da1d7f53708550617c957ade47d20b9df19e43c5cf2fdb9ddda93f46ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c5f875d88-fvdcx" Aug 13 01:14:12.571788 kubelet[2774]: E0813 01:14:12.571244 2774 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4b5b1da1d7f53708550617c957ade47d20b9df19e43c5cf2fdb9ddda93f46ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c5f875d88-fvdcx" Aug 13 01:14:12.572846 kubelet[2774]: E0813 01:14:12.572450 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-c5f875d88-fvdcx_calico-system(e328d277-9a4f-4cd1-abd9-2dfeda4cfcc9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-c5f875d88-fvdcx_calico-system(e328d277-9a4f-4cd1-abd9-2dfeda4cfcc9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e4b5b1da1d7f53708550617c957ade47d20b9df19e43c5cf2fdb9ddda93f46ff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c5f875d88-fvdcx" podUID="e328d277-9a4f-4cd1-abd9-2dfeda4cfcc9" Aug 13 01:14:17.504985 kubelet[2774]: E0813 01:14:17.504939 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:14:17.506997 containerd[1543]: time="2025-08-13T01:14:17.506300354Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 01:14:19.508429 containerd[1543]: time="2025-08-13T01:14:19.508203859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bc7dg,Uid:e1069218-cdb9-4130-adce-4bdd23361a59,Namespace:calico-system,Attempt:0,}" Aug 13 
01:14:19.722452 containerd[1543]: time="2025-08-13T01:14:19.722216118Z" level=error msg="Failed to destroy network for sandbox \"3cef1885c07ab5a54273e21641146e0be5a03cc7f470d3d7c8dbc7ec3173056c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:19.727083 systemd[1]: run-netns-cni\x2d56e92dd4\x2d2ad8\x2da1f7\x2d5b61\x2d502d7ba153b7.mount: Deactivated successfully. Aug 13 01:14:19.732267 containerd[1543]: time="2025-08-13T01:14:19.731971767Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bc7dg,Uid:e1069218-cdb9-4130-adce-4bdd23361a59,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3cef1885c07ab5a54273e21641146e0be5a03cc7f470d3d7c8dbc7ec3173056c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:19.733901 kubelet[2774]: E0813 01:14:19.733734 2774 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3cef1885c07ab5a54273e21641146e0be5a03cc7f470d3d7c8dbc7ec3173056c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:19.736239 kubelet[2774]: E0813 01:14:19.735457 2774 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3cef1885c07ab5a54273e21641146e0be5a03cc7f470d3d7c8dbc7ec3173056c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bc7dg" Aug 13 01:14:19.736239 kubelet[2774]: E0813 01:14:19.735495 2774 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3cef1885c07ab5a54273e21641146e0be5a03cc7f470d3d7c8dbc7ec3173056c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bc7dg" Aug 13 01:14:19.736239 kubelet[2774]: E0813 01:14:19.735563 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bc7dg_calico-system(e1069218-cdb9-4130-adce-4bdd23361a59)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bc7dg_calico-system(e1069218-cdb9-4130-adce-4bdd23361a59)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3cef1885c07ab5a54273e21641146e0be5a03cc7f470d3d7c8dbc7ec3173056c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bc7dg" podUID="e1069218-cdb9-4130-adce-4bdd23361a59" Aug 13 01:14:20.523141 kubelet[2774]: E0813 01:14:20.505363 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 
172.232.0.17" Aug 13 01:14:20.523141 kubelet[2774]: E0813 01:14:20.513808 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:14:20.523141 kubelet[2774]: E0813 01:14:20.514118 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:14:20.523601 containerd[1543]: time="2025-08-13T01:14:20.507919399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qnwfr,Uid:713f2d42-12d1-4f50-bdc7-9964c95a9e2a,Namespace:kube-system,Attempt:0,}" Aug 13 01:14:20.745387 update_engine[1517]: I20250813 01:14:20.738947 1517 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Aug 13 01:14:20.745387 update_engine[1517]: I20250813 01:14:20.740126 1517 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Aug 13 01:14:20.745387 update_engine[1517]: I20250813 01:14:20.740383 1517 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Aug 13 01:14:20.745387 update_engine[1517]: I20250813 01:14:20.741410 1517 omaha_request_params.cc:62] Current group set to beta Aug 13 01:14:20.745387 update_engine[1517]: I20250813 01:14:20.742027 1517 update_attempter.cc:499] Already updated boot flags. Skipping. Aug 13 01:14:20.745387 update_engine[1517]: I20250813 01:14:20.742041 1517 update_attempter.cc:643] Scheduling an action processor start. Aug 13 01:14:20.745387 update_engine[1517]: I20250813 01:14:20.742063 1517 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Aug 13 01:14:20.745387 update_engine[1517]: I20250813 01:14:20.742125 1517 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Aug 13 01:14:20.745387 update_engine[1517]: I20250813 01:14:20.742229 1517 omaha_request_action.cc:271] Posting an Omaha request to disabled Aug 13 01:14:20.745387 update_engine[1517]: I20250813 01:14:20.742239 1517 omaha_request_action.cc:272] Request: Aug 13 01:14:20.745387 update_engine[1517]: Aug 13 01:14:20.745387 update_engine[1517]: Aug 13 01:14:20.745387 update_engine[1517]: Aug 13 01:14:20.745387 update_engine[1517]: Aug 13 01:14:20.745387 update_engine[1517]: Aug 13 01:14:20.745387 update_engine[1517]: Aug 13 01:14:20.745387 update_engine[1517]: Aug 13 01:14:20.745387 update_engine[1517]: Aug 13 01:14:20.745387 update_engine[1517]: I20250813 01:14:20.742276 1517 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Aug 13 01:14:20.745387 update_engine[1517]: I20250813 01:14:20.744599 1517 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Aug 13 01:14:20.745387 update_engine[1517]: I20250813 01:14:20.745131 1517 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
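The recurring "Nameserver limits exceeded" warnings above come from the kubelet truncating the node's resolver list; it applies only the first few nameservers (three in current kubelet releases, matching the three addresses shown in the log) and omits the rest. A minimal illustrative Go sketch that counts the configured nameservers, assuming the node's resolver config is /etc/resolv.conf; editor-supplied example, not part of the captured log:

// resolv_check.go -- illustrative sketch only, not part of the captured log.
// Counts "nameserver" entries in /etc/resolv.conf; more than three triggers
// the kubelet "Nameserver limits exceeded" warning seen above.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("%d nameservers configured: %v\n", len(servers), servers)
	if len(servers) > 3 {
		fmt.Println("more than 3 nameservers: kubelet will omit the extras")
	}
}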
Aug 13 01:14:20.747835 locksmithd[1550]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Aug 13 01:14:20.774566 update_engine[1517]: E20250813 01:14:20.773891 1517 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Aug 13 01:14:20.775306 update_engine[1517]: I20250813 01:14:20.775024 1517 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Aug 13 01:14:20.778949 containerd[1543]: time="2025-08-13T01:14:20.778881855Z" level=error msg="Failed to destroy network for sandbox \"f412ff66f8878082ec18480dc65fdef2ec8c75fe8fa3c93dffed052afd2b5c6b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:20.781252 systemd[1]: run-netns-cni\x2d6f06e5db\x2d6592\x2d52b3\x2d98d0\x2db3b6096f578e.mount: Deactivated successfully. Aug 13 01:14:20.784383 containerd[1543]: time="2025-08-13T01:14:20.784317007Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qnwfr,Uid:713f2d42-12d1-4f50-bdc7-9964c95a9e2a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f412ff66f8878082ec18480dc65fdef2ec8c75fe8fa3c93dffed052afd2b5c6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:20.784746 kubelet[2774]: E0813 01:14:20.784612 2774 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f412ff66f8878082ec18480dc65fdef2ec8c75fe8fa3c93dffed052afd2b5c6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:20.784746 kubelet[2774]: E0813 01:14:20.784684 2774 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f412ff66f8878082ec18480dc65fdef2ec8c75fe8fa3c93dffed052afd2b5c6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qnwfr" Aug 13 01:14:20.784746 kubelet[2774]: E0813 01:14:20.784706 2774 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f412ff66f8878082ec18480dc65fdef2ec8c75fe8fa3c93dffed052afd2b5c6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qnwfr" Aug 13 01:14:20.785461 kubelet[2774]: E0813 01:14:20.785182 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-qnwfr_kube-system(713f2d42-12d1-4f50-bdc7-9964c95a9e2a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-qnwfr_kube-system(713f2d42-12d1-4f50-bdc7-9964c95a9e2a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f412ff66f8878082ec18480dc65fdef2ec8c75fe8fa3c93dffed052afd2b5c6b\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-qnwfr" podUID="713f2d42-12d1-4f50-bdc7-9964c95a9e2a" Aug 13 01:14:21.080079 kubelet[2774]: I0813 01:14:21.079912 2774 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:14:21.080951 kubelet[2774]: I0813 01:14:21.080936 2774 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:14:21.083971 kubelet[2774]: I0813 01:14:21.083947 2774 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:14:21.101404 kubelet[2774]: I0813 01:14:21.101365 2774 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:14:21.101960 kubelet[2774]: I0813 01:14:21.101938 2774 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-668d6bf9bc-wp8jf","kube-system/coredns-668d6bf9bc-qnwfr","calico-system/calico-kube-controllers-c5f875d88-fvdcx","calico-system/calico-node-9bc9j","calico-system/csi-node-driver-bc7dg","tigera-operator/tigera-operator-747864d56d-2rkjv","calico-system/calico-typha-bf7d6589c-n48dd","kube-system/kube-controller-manager-172-234-199-8","kube-system/kube-proxy-8p8zx","kube-system/kube-apiserver-172-234-199-8","kube-system/kube-scheduler-172-234-199-8"] Aug 13 01:14:21.102116 kubelet[2774]: E0813 01:14:21.102103 2774 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-wp8jf" Aug 13 01:14:21.102180 kubelet[2774]: E0813 01:14:21.102171 2774 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-qnwfr" Aug 13 01:14:21.102329 kubelet[2774]: E0813 01:14:21.102315 2774 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-c5f875d88-fvdcx" Aug 13 01:14:21.102407 kubelet[2774]: E0813 01:14:21.102398 2774 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-9bc9j" Aug 13 01:14:21.102480 kubelet[2774]: E0813 01:14:21.102471 2774 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-bc7dg" Aug 13 01:14:21.109457 containerd[1543]: time="2025-08-13T01:14:21.107226547Z" level=info msg="StopContainer for \"8c2d2eaaa23c0c279cabbe63af173eee552624917677f2d8263c27ba501dff36\" with timeout 2 (s)" Aug 13 01:14:21.111011 containerd[1543]: time="2025-08-13T01:14:21.110989755Z" level=info msg="Stop container \"8c2d2eaaa23c0c279cabbe63af173eee552624917677f2d8263c27ba501dff36\" with signal terminated" Aug 13 01:14:21.196992 systemd[1]: cri-containerd-8c2d2eaaa23c0c279cabbe63af173eee552624917677f2d8263c27ba501dff36.scope: Deactivated successfully. Aug 13 01:14:21.197722 systemd[1]: cri-containerd-8c2d2eaaa23c0c279cabbe63af173eee552624917677f2d8263c27ba501dff36.scope: Consumed 7.205s CPU time, 101.6M memory peak. 
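The eviction manager entries above keep trying to reclaim ephemeral-storage, and the calico/node image pulls keep failing with "no space left on device", so the underlying problem is a full node filesystem. A minimal illustrative Go sketch (Linux-only) that reports free space on the conventional kubelet and containerd roots; the paths are assumed defaults and the snippet is editor-supplied, not part of the captured log:

// disk_pressure_check.go -- illustrative sketch only (Linux), not part of the captured log.
// Reports total and available space for the filesystems implicated in the
// "no space left on device" pull failures and the ephemeral-storage evictions.
package main

import (
	"fmt"
	"syscall"
)

func report(path string) {
	var st syscall.Statfs_t
	if err := syscall.Statfs(path, &st); err != nil {
		fmt.Printf("%s: %v\n", path, err)
		return
	}
	total := st.Blocks * uint64(st.Bsize)
	avail := st.Bavail * uint64(st.Bsize)
	fmt.Printf("%-20s total=%d MiB avail=%d MiB (%.1f%% free)\n",
		path, total>>20, avail>>20, 100*float64(avail)/float64(total))
}

func main() {
	// Conventional default directories; adjust if the node uses custom roots.
	for _, p := range []string{"/var/lib/kubelet", "/var/lib/containerd"} {
		report(p)
	}
}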
Aug 13 01:14:21.203014 containerd[1543]: time="2025-08-13T01:14:21.202970299Z" level=info msg="received exit event container_id:\"8c2d2eaaa23c0c279cabbe63af173eee552624917677f2d8263c27ba501dff36\" id:\"8c2d2eaaa23c0c279cabbe63af173eee552624917677f2d8263c27ba501dff36\" pid:3091 exited_at:{seconds:1755047661 nanos:201607943}" Aug 13 01:14:21.203341 containerd[1543]: time="2025-08-13T01:14:21.203317535Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8c2d2eaaa23c0c279cabbe63af173eee552624917677f2d8263c27ba501dff36\" id:\"8c2d2eaaa23c0c279cabbe63af173eee552624917677f2d8263c27ba501dff36\" pid:3091 exited_at:{seconds:1755047661 nanos:201607943}" Aug 13 01:14:21.233719 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8c2d2eaaa23c0c279cabbe63af173eee552624917677f2d8263c27ba501dff36-rootfs.mount: Deactivated successfully. Aug 13 01:14:21.264099 containerd[1543]: time="2025-08-13T01:14:21.264044963Z" level=info msg="StopContainer for \"8c2d2eaaa23c0c279cabbe63af173eee552624917677f2d8263c27ba501dff36\" returns successfully" Aug 13 01:14:21.266156 containerd[1543]: time="2025-08-13T01:14:21.265813436Z" level=info msg="StopPodSandbox for \"402bb977e58eb67d453f2a6cd3a6ce5f3883cb4f4dfc5d75ff8c0e3017ad6d77\"" Aug 13 01:14:21.266208 containerd[1543]: time="2025-08-13T01:14:21.266183092Z" level=info msg="Container to stop \"8c2d2eaaa23c0c279cabbe63af173eee552624917677f2d8263c27ba501dff36\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 01:14:21.283499 systemd[1]: cri-containerd-402bb977e58eb67d453f2a6cd3a6ce5f3883cb4f4dfc5d75ff8c0e3017ad6d77.scope: Deactivated successfully. Aug 13 01:14:21.287698 containerd[1543]: time="2025-08-13T01:14:21.287185248Z" level=info msg="TaskExit event in podsandbox handler container_id:\"402bb977e58eb67d453f2a6cd3a6ce5f3883cb4f4dfc5d75ff8c0e3017ad6d77\" id:\"402bb977e58eb67d453f2a6cd3a6ce5f3883cb4f4dfc5d75ff8c0e3017ad6d77\" pid:2931 exit_status:137 exited_at:{seconds:1755047661 nanos:286894524}" Aug 13 01:14:21.304333 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount929401159.mount: Deactivated successfully. 
Aug 13 01:14:21.306127 containerd[1543]: time="2025-08-13T01:14:21.305657599Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount929401159: write /var/lib/containerd/tmpmounts/containerd-mount929401159/usr/bin/calico-node: no space left on device" Aug 13 01:14:21.306127 containerd[1543]: time="2025-08-13T01:14:21.305816412Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Aug 13 01:14:21.307063 kubelet[2774]: E0813 01:14:21.306445 2774 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount929401159: write /var/lib/containerd/tmpmounts/containerd-mount929401159/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.2" Aug 13 01:14:21.307063 kubelet[2774]: E0813 01:14:21.306503 2774 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount929401159: write /var/lib/containerd/tmpmounts/containerd-mount929401159/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.2" Aug 13 01:14:21.307063 kubelet[2774]: E0813 01:14:21.306745 2774 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-node,Image:ghcr.io/flatcar/calico/node:v3.30.2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:WAIT_FOR_DATASTORE,Value:true,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:k8s,operator,bgp,ValueFrom:nil,},EnvVar{Name:CALICO_DISABLE_FILE_LOGGING,Value:false,ValueFrom:nil,},EnvVar{Name:FELIX_DEFAULTENDPOINTTOHOSTACTION,Value:ACCEPT,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHENABLED,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHPORT,Value:9099,ValueFrom:nil,},EnvVar{Name:NODENAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:FELIX_TYPHAK8SNAMESPACE,Value:calico-system,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAK8SSERVICENAME,Value:calico-typha,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACAFILE,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACERTFILE,Value:/node-certs/tls.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAKEYFILE,Value:/node-certs/tls.key,ValueFrom:nil,},EnvVar{Name:NO_DEFAULT_POOLS,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSGOLDMANESERVER,Value:goldmane.calico-system.svc:7443,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSFLUSHINTERVAL,Value:15,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACN,Value:typha-server,ValueFrom:nil,},EnvVar{Name:CALICO_MANAGE_CNI,Value:true,ValueFrom:nil,},EnvVar{Name:CALICO_NETWORKING_BACKEND,Value:bird,ValueFrom:nil,},EnvVar{Name:IP,Value:autodetect,ValueFrom:nil,},EnvVar{Name:IP_AUTODETECTION_METHOD,Value:first-found,ValueFrom:nil,},EnvVar{Name:IP6,Value:none,ValueFrom:nil,},EnvVar{Name:FELIX_IPV6SUPPORT,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-log-dir,ReadOnly:false,MountPath:/var/log/calico/cni,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-net-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:node-certs,ReadOnly:true,MountPath:/node-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:policysync,ReadOnly:false,MountPath:/var/run/nodeagent,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib-calico,ReadOnly:false,MountPath:/var/lib/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-calico,ReadOnly:false,MountPath:/var/run/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.l
ock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b2tsn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/liveness,Port:{0 9099 },Host:localhost,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/calico-node -bird-ready -felix-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/bin/calico-node -shutdown],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-node-9bc9j_calico-system(0909eaba-89ba-4b02-b2f3-a17e3b6e2afc): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount929401159: write /var/lib/containerd/tmpmounts/containerd-mount929401159/usr/bin/calico-node: no space left on device" logger="UnhandledError" Aug 13 01:14:21.308024 kubelet[2774]: E0813 01:14:21.307927 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount929401159: write /var/lib/containerd/tmpmounts/containerd-mount929401159/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-9bc9j" podUID="0909eaba-89ba-4b02-b2f3-a17e3b6e2afc" Aug 13 01:14:21.330114 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-402bb977e58eb67d453f2a6cd3a6ce5f3883cb4f4dfc5d75ff8c0e3017ad6d77-rootfs.mount: Deactivated successfully. 
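The calico/node image pull fails because extracting a roughly 158 MB layer hits "no space left on device" under /var/lib/containerd. A crude du-style walk is often the quickest way to see which top-level directory is consuming that space; the sketch below takes the path from the error above, and it needs root to read most of the tree:

    // duwalk.go - sum file sizes under each top-level entry of a directory,
    // a crude "du -s" to find what is filling the disk behind the
    // "no space left on device" pull failure above.
    package main

    import (
    	"fmt"
    	"io/fs"
    	"os"
    	"path/filepath"
    )

    func dirSize(root string) (int64, error) {
    	var total int64
    	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
    		if err != nil {
    			return nil // skip unreadable entries instead of aborting
    		}
    		if d.Type().IsRegular() {
    			if info, ierr := d.Info(); ierr == nil {
    				total += info.Size()
    			}
    		}
    		return nil
    	})
    	return total, err
    }

    func main() {
    	root := "/var/lib/containerd" // path from the pull error
    	entries, err := os.ReadDir(root)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	for _, e := range entries {
    		size, _ := dirSize(filepath.Join(root, e.Name()))
    		fmt.Printf("%12d  %s\n", size, e.Name())
    	}
    }

Once enough space is reclaimed (the image GC above is already trying to delete unused images), the kubelet retries the pull on its normal backoff.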
Aug 13 01:14:21.336558 containerd[1543]: time="2025-08-13T01:14:21.336337974Z" level=info msg="shim disconnected" id=402bb977e58eb67d453f2a6cd3a6ce5f3883cb4f4dfc5d75ff8c0e3017ad6d77 namespace=k8s.io Aug 13 01:14:21.336558 containerd[1543]: time="2025-08-13T01:14:21.336395595Z" level=warning msg="cleaning up after shim disconnected" id=402bb977e58eb67d453f2a6cd3a6ce5f3883cb4f4dfc5d75ff8c0e3017ad6d77 namespace=k8s.io Aug 13 01:14:21.336558 containerd[1543]: time="2025-08-13T01:14:21.336403875Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 01:14:21.352135 containerd[1543]: time="2025-08-13T01:14:21.351997252Z" level=info msg="received exit event sandbox_id:\"402bb977e58eb67d453f2a6cd3a6ce5f3883cb4f4dfc5d75ff8c0e3017ad6d77\" exit_status:137 exited_at:{seconds:1755047661 nanos:286894524}" Aug 13 01:14:21.352881 containerd[1543]: time="2025-08-13T01:14:21.352509871Z" level=info msg="TearDown network for sandbox \"402bb977e58eb67d453f2a6cd3a6ce5f3883cb4f4dfc5d75ff8c0e3017ad6d77\" successfully" Aug 13 01:14:21.352881 containerd[1543]: time="2025-08-13T01:14:21.352529452Z" level=info msg="StopPodSandbox for \"402bb977e58eb67d453f2a6cd3a6ce5f3883cb4f4dfc5d75ff8c0e3017ad6d77\" returns successfully" Aug 13 01:14:21.358875 kubelet[2774]: I0813 01:14:21.358852 2774 eviction_manager.go:627] "Eviction manager: pod is evicted successfully" pod="tigera-operator/tigera-operator-747864d56d-2rkjv" Aug 13 01:14:21.359454 kubelet[2774]: I0813 01:14:21.359431 2774 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["tigera-operator/tigera-operator-747864d56d-2rkjv"] Aug 13 01:14:21.389650 kubelet[2774]: I0813 01:14:21.389599 2774 kubelet.go:2351] "Pod admission denied" podUID="c901d1d2-3481-45ff-8bed-38ab71d36ad0" pod="tigera-operator/tigera-operator-747864d56d-kmbf4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:21.424380 kubelet[2774]: I0813 01:14:21.424276 2774 kubelet.go:2351] "Pod admission denied" podUID="40f52f9a-9239-4b32-9589-7bd2194ce06a" pod="tigera-operator/tigera-operator-747864d56d-gzzlv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:21.465610 kubelet[2774]: I0813 01:14:21.465109 2774 kubelet.go:2351] "Pod admission denied" podUID="28677dc7-51f0-463c-bd0a-6e9fb2c57940" pod="tigera-operator/tigera-operator-747864d56d-97wlz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:21.490400 kubelet[2774]: I0813 01:14:21.489664 2774 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lswll\" (UniqueName: \"kubernetes.io/projected/fe383991-29fe-448b-9865-0e2a6fc84d2e-kube-api-access-lswll\") pod \"fe383991-29fe-448b-9865-0e2a6fc84d2e\" (UID: \"fe383991-29fe-448b-9865-0e2a6fc84d2e\") " Aug 13 01:14:21.491163 kubelet[2774]: I0813 01:14:21.491027 2774 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fe383991-29fe-448b-9865-0e2a6fc84d2e-var-lib-calico\") pod \"fe383991-29fe-448b-9865-0e2a6fc84d2e\" (UID: \"fe383991-29fe-448b-9865-0e2a6fc84d2e\") " Aug 13 01:14:21.491163 kubelet[2774]: I0813 01:14:21.491125 2774 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe383991-29fe-448b-9865-0e2a6fc84d2e-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "fe383991-29fe-448b-9865-0e2a6fc84d2e" (UID: "fe383991-29fe-448b-9865-0e2a6fc84d2e"). InnerVolumeSpecName "var-lib-calico". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 01:14:21.500220 kubelet[2774]: I0813 01:14:21.500195 2774 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe383991-29fe-448b-9865-0e2a6fc84d2e-kube-api-access-lswll" (OuterVolumeSpecName: "kube-api-access-lswll") pod "fe383991-29fe-448b-9865-0e2a6fc84d2e" (UID: "fe383991-29fe-448b-9865-0e2a6fc84d2e"). InnerVolumeSpecName "kube-api-access-lswll". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 01:14:21.507364 kubelet[2774]: I0813 01:14:21.507322 2774 kubelet.go:2351] "Pod admission denied" podUID="359e72ad-1adc-41f8-abac-821c9d7b8269" pod="tigera-operator/tigera-operator-747864d56d-vdht2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:21.515960 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-402bb977e58eb67d453f2a6cd3a6ce5f3883cb4f4dfc5d75ff8c0e3017ad6d77-shm.mount: Deactivated successfully. Aug 13 01:14:21.516149 systemd[1]: var-lib-kubelet-pods-fe383991\x2d29fe\x2d448b\x2d9865\x2d0e2a6fc84d2e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlswll.mount: Deactivated successfully. Aug 13 01:14:21.539294 kubelet[2774]: I0813 01:14:21.539249 2774 kubelet.go:2351] "Pod admission denied" podUID="c2f448ee-b8f5-4d05-8aca-6a09d41654b8" pod="tigera-operator/tigera-operator-747864d56d-v55ht" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:21.541006 kubelet[2774]: I0813 01:14:21.540978 2774 status_manager.go:890] "Failed to get status for pod" podUID="c2f448ee-b8f5-4d05-8aca-6a09d41654b8" pod="tigera-operator/tigera-operator-747864d56d-v55ht" err="pods \"tigera-operator-747864d56d-v55ht\" is forbidden: User \"system:node:172-234-199-8\" cannot get resource \"pods\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node '172-234-199-8' and this object" Aug 13 01:14:21.592101 kubelet[2774]: I0813 01:14:21.591911 2774 reconciler_common.go:299] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fe383991-29fe-448b-9865-0e2a6fc84d2e-var-lib-calico\") on node \"172-234-199-8\" DevicePath \"\"" Aug 13 01:14:21.592101 kubelet[2774]: I0813 01:14:21.591945 2774 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lswll\" (UniqueName: \"kubernetes.io/projected/fe383991-29fe-448b-9865-0e2a6fc84d2e-kube-api-access-lswll\") on node \"172-234-199-8\" DevicePath \"\"" Aug 13 01:14:21.846591 kubelet[2774]: I0813 01:14:21.846368 2774 scope.go:117] "RemoveContainer" containerID="8c2d2eaaa23c0c279cabbe63af173eee552624917677f2d8263c27ba501dff36" Aug 13 01:14:21.850691 containerd[1543]: time="2025-08-13T01:14:21.850643171Z" level=info msg="RemoveContainer for \"8c2d2eaaa23c0c279cabbe63af173eee552624917677f2d8263c27ba501dff36\"" Aug 13 01:14:21.855409 systemd[1]: Removed slice kubepods-besteffort-podfe383991_29fe_448b_9865_0e2a6fc84d2e.slice - libcontainer container kubepods-besteffort-podfe383991_29fe_448b_9865_0e2a6fc84d2e.slice. Aug 13 01:14:21.856007 systemd[1]: kubepods-besteffort-podfe383991_29fe_448b_9865_0e2a6fc84d2e.slice: Consumed 7.246s CPU time, 101.8M memory peak. 
Aug 13 01:14:21.857375 containerd[1543]: time="2025-08-13T01:14:21.856873256Z" level=info msg="RemoveContainer for \"8c2d2eaaa23c0c279cabbe63af173eee552624917677f2d8263c27ba501dff36\" returns successfully" Aug 13 01:14:21.858875 kubelet[2774]: I0813 01:14:21.858797 2774 scope.go:117] "RemoveContainer" containerID="8c2d2eaaa23c0c279cabbe63af173eee552624917677f2d8263c27ba501dff36" Aug 13 01:14:21.860144 containerd[1543]: time="2025-08-13T01:14:21.859345621Z" level=error msg="ContainerStatus for \"8c2d2eaaa23c0c279cabbe63af173eee552624917677f2d8263c27ba501dff36\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8c2d2eaaa23c0c279cabbe63af173eee552624917677f2d8263c27ba501dff36\": not found" Aug 13 01:14:21.860718 kubelet[2774]: E0813 01:14:21.860601 2774 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8c2d2eaaa23c0c279cabbe63af173eee552624917677f2d8263c27ba501dff36\": not found" containerID="8c2d2eaaa23c0c279cabbe63af173eee552624917677f2d8263c27ba501dff36" Aug 13 01:14:21.860940 kubelet[2774]: I0813 01:14:21.860661 2774 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8c2d2eaaa23c0c279cabbe63af173eee552624917677f2d8263c27ba501dff36"} err="failed to get container status \"8c2d2eaaa23c0c279cabbe63af173eee552624917677f2d8263c27ba501dff36\": rpc error: code = NotFound desc = an error occurred when try to find container \"8c2d2eaaa23c0c279cabbe63af173eee552624917677f2d8263c27ba501dff36\": not found" Aug 13 01:14:21.894319 kubelet[2774]: I0813 01:14:21.894251 2774 kubelet.go:2351] "Pod admission denied" podUID="6eb3e669-ac0e-4c3a-9883-dc9bcc5a0e46" pod="tigera-operator/tigera-operator-747864d56d-mjfxt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:21.919920 kubelet[2774]: I0813 01:14:21.919870 2774 kubelet.go:2351] "Pod admission denied" podUID="3413e4b5-1fa2-40e2-bfb3-8e4c7d119996" pod="tigera-operator/tigera-operator-747864d56d-gmznp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:21.951550 kubelet[2774]: I0813 01:14:21.951495 2774 kubelet.go:2351] "Pod admission denied" podUID="6fe6e4e9-a226-4074-94a9-00616c0a9323" pod="tigera-operator/tigera-operator-747864d56d-fzvgs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:21.975325 kubelet[2774]: I0813 01:14:21.974132 2774 kubelet.go:2351] "Pod admission denied" podUID="84b65c8c-215b-42ea-b114-2c9125f517fa" pod="tigera-operator/tigera-operator-747864d56d-ckvw8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:22.013689 kubelet[2774]: I0813 01:14:22.013636 2774 kubelet.go:2351] "Pod admission denied" podUID="8ab3318e-bdfe-43c1-a123-5ad148d57a25" pod="tigera-operator/tigera-operator-747864d56d-szmr6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:22.045720 kubelet[2774]: I0813 01:14:22.045660 2774 kubelet.go:2351] "Pod admission denied" podUID="ec008822-a031-48f3-962c-22fa82be6031" pod="tigera-operator/tigera-operator-747864d56d-gqdw5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:22.134221 kubelet[2774]: I0813 01:14:22.133212 2774 kubelet.go:2351] "Pod admission denied" podUID="48ec9ebf-fcd4-465b-bd9d-915fc3894929" pod="tigera-operator/tigera-operator-747864d56d-fltg2" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:22.284036 kubelet[2774]: I0813 01:14:22.283975 2774 kubelet.go:2351] "Pod admission denied" podUID="f9101960-71a8-4827-8a68-e4a9e98a64f6" pod="tigera-operator/tigera-operator-747864d56d-22jgr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:22.360408 kubelet[2774]: I0813 01:14:22.360312 2774 eviction_manager.go:458] "Eviction manager: pods successfully cleaned up" pods=["tigera-operator/tigera-operator-747864d56d-2rkjv"] Aug 13 01:14:22.442381 kubelet[2774]: I0813 01:14:22.442124 2774 kubelet.go:2351] "Pod admission denied" podUID="642a18f0-a91d-4890-9294-c2bc2500e25c" pod="tigera-operator/tigera-operator-747864d56d-lvx4n" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:22.533102 kubelet[2774]: I0813 01:14:22.533033 2774 kubelet.go:2351] "Pod admission denied" podUID="f4696f17-9bbe-45f6-8c08-15c287fcb67e" pod="tigera-operator/tigera-operator-747864d56d-zjgq7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:22.688305 kubelet[2774]: I0813 01:14:22.688252 2774 kubelet.go:2351] "Pod admission denied" podUID="dda048c1-ddf0-424a-9a2b-6908f9f3355b" pod="tigera-operator/tigera-operator-747864d56d-mhwkx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:22.937391 kubelet[2774]: I0813 01:14:22.937295 2774 kubelet.go:2351] "Pod admission denied" podUID="19628536-b47f-4f44-9be1-d57c964638cf" pod="tigera-operator/tigera-operator-747864d56d-rhxll" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:23.084385 kubelet[2774]: I0813 01:14:23.084048 2774 kubelet.go:2351] "Pod admission denied" podUID="20119fed-9dcd-45bd-98ad-8762541d7dc7" pod="tigera-operator/tigera-operator-747864d56d-kl4cf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:23.184943 kubelet[2774]: I0813 01:14:23.184673 2774 kubelet.go:2351] "Pod admission denied" podUID="905785d1-a4f7-4daa-84b4-1e8239e6075a" pod="tigera-operator/tigera-operator-747864d56d-hdhj8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:23.337956 kubelet[2774]: I0813 01:14:23.336204 2774 kubelet.go:2351] "Pod admission denied" podUID="c06ef02f-24dc-4614-93e0-3951acec200c" pod="tigera-operator/tigera-operator-747864d56d-559hk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:23.486178 kubelet[2774]: I0813 01:14:23.486121 2774 kubelet.go:2351] "Pod admission denied" podUID="f4892e05-af0e-4a99-b3ed-f2a22895a3d0" pod="tigera-operator/tigera-operator-747864d56d-jd78p" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:23.636009 kubelet[2774]: I0813 01:14:23.634995 2774 kubelet.go:2351] "Pod admission denied" podUID="1fec4de4-6d51-421d-8d6a-869434b063f3" pod="tigera-operator/tigera-operator-747864d56d-58pw4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:23.786244 kubelet[2774]: I0813 01:14:23.786194 2774 kubelet.go:2351] "Pod admission denied" podUID="c65907ff-b0aa-4b1c-a9a0-c2426845162f" pod="tigera-operator/tigera-operator-747864d56d-ntf4c" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:23.935818 kubelet[2774]: I0813 01:14:23.935561 2774 kubelet.go:2351] "Pod admission denied" podUID="d2e57f59-24ae-4d71-8171-e6e9552cd18a" pod="tigera-operator/tigera-operator-747864d56d-kwf54" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:24.186660 kubelet[2774]: I0813 01:14:24.186448 2774 kubelet.go:2351] "Pod admission denied" podUID="7fb4484d-bd27-4590-86a3-8f1c9c9df032" pod="tigera-operator/tigera-operator-747864d56d-9nnjj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:24.284236 kubelet[2774]: I0813 01:14:24.284176 2774 kubelet.go:2351] "Pod admission denied" podUID="3c0324df-c0eb-4787-ae29-07702050cd92" pod="tigera-operator/tigera-operator-747864d56d-9qfbv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:24.385158 kubelet[2774]: I0813 01:14:24.385111 2774 kubelet.go:2351] "Pod admission denied" podUID="e6f2aeaf-bb87-44b3-ae28-55d962651f32" pod="tigera-operator/tigera-operator-747864d56d-q2gnm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:24.484116 kubelet[2774]: I0813 01:14:24.482806 2774 kubelet.go:2351] "Pod admission denied" podUID="b8c194c1-af61-4e5c-a3bb-cd8dd79fe947" pod="tigera-operator/tigera-operator-747864d56d-gbjvx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:24.585562 kubelet[2774]: I0813 01:14:24.585491 2774 kubelet.go:2351] "Pod admission denied" podUID="7b6ef7b1-37cc-47d8-9c3e-a027fe272b4f" pod="tigera-operator/tigera-operator-747864d56d-l5z6l" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:24.684945 kubelet[2774]: I0813 01:14:24.684649 2774 kubelet.go:2351] "Pod admission denied" podUID="83be4293-211f-4bf5-ad13-6cea4bbb97a6" pod="tigera-operator/tigera-operator-747864d56d-d9htk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:24.786191 kubelet[2774]: I0813 01:14:24.785078 2774 kubelet.go:2351] "Pod admission denied" podUID="1ab9f46e-7ffa-4f79-a43f-a3159f91fda1" pod="tigera-operator/tigera-operator-747864d56d-7zvt9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:24.885942 kubelet[2774]: I0813 01:14:24.885864 2774 kubelet.go:2351] "Pod admission denied" podUID="f4df7cf1-42d1-4c6b-9a33-5154e7c9c0b8" pod="tigera-operator/tigera-operator-747864d56d-l9v8t" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:24.986406 kubelet[2774]: I0813 01:14:24.986337 2774 kubelet.go:2351] "Pod admission denied" podUID="c71f311a-708e-43e2-bdd1-6b9be90eb967" pod="tigera-operator/tigera-operator-747864d56d-9fljb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:25.191407 kubelet[2774]: I0813 01:14:25.190632 2774 kubelet.go:2351] "Pod admission denied" podUID="9c916efa-99be-467d-9a7d-2371f4f2f4a6" pod="tigera-operator/tigera-operator-747864d56d-trtt6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:25.290198 kubelet[2774]: I0813 01:14:25.290137 2774 kubelet.go:2351] "Pod admission denied" podUID="b704d818-25f6-43c0-98ec-1a1fec602bc2" pod="tigera-operator/tigera-operator-747864d56d-9jfxf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:25.385019 kubelet[2774]: I0813 01:14:25.384947 2774 kubelet.go:2351] "Pod admission denied" podUID="9f4bfa5e-5b56-44b8-a243-bf807516ffd2" pod="tigera-operator/tigera-operator-747864d56d-nt4vw" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:25.504533 kubelet[2774]: E0813 01:14:25.504178 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:14:25.505211 containerd[1543]: time="2025-08-13T01:14:25.505153629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wp8jf,Uid:56ce87f4-747f-470b-8388-a8400bdda009,Namespace:kube-system,Attempt:0,}" Aug 13 01:14:25.587855 containerd[1543]: time="2025-08-13T01:14:25.587687105Z" level=error msg="Failed to destroy network for sandbox \"0c441a05bdf8a0258b4fa4f8afb4c4e61baf5888a502cc7f1a5966748ccb7fd5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:25.591262 systemd[1]: run-netns-cni\x2d19c7d7aa\x2dd5c0\x2d7054\x2ddb2d\x2d4dde63502cb6.mount: Deactivated successfully. Aug 13 01:14:25.594455 containerd[1543]: time="2025-08-13T01:14:25.594231175Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wp8jf,Uid:56ce87f4-747f-470b-8388-a8400bdda009,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c441a05bdf8a0258b4fa4f8afb4c4e61baf5888a502cc7f1a5966748ccb7fd5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:25.595127 kubelet[2774]: E0813 01:14:25.595071 2774 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c441a05bdf8a0258b4fa4f8afb4c4e61baf5888a502cc7f1a5966748ccb7fd5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:25.595203 kubelet[2774]: E0813 01:14:25.595153 2774 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c441a05bdf8a0258b4fa4f8afb4c4e61baf5888a502cc7f1a5966748ccb7fd5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wp8jf" Aug 13 01:14:25.595203 kubelet[2774]: E0813 01:14:25.595178 2774 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c441a05bdf8a0258b4fa4f8afb4c4e61baf5888a502cc7f1a5966748ccb7fd5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wp8jf" Aug 13 01:14:25.595254 kubelet[2774]: E0813 01:14:25.595221 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-wp8jf_kube-system(56ce87f4-747f-470b-8388-a8400bdda009)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-wp8jf_kube-system(56ce87f4-747f-470b-8388-a8400bdda009)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0c441a05bdf8a0258b4fa4f8afb4c4e61baf5888a502cc7f1a5966748ccb7fd5\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-wp8jf" podUID="56ce87f4-747f-470b-8388-a8400bdda009" Aug 13 01:14:25.597491 kubelet[2774]: I0813 01:14:25.596481 2774 kubelet.go:2351] "Pod admission denied" podUID="eac9ed48-885c-412d-b532-3658380a8565" pod="tigera-operator/tigera-operator-747864d56d-p52ln" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:25.687314 kubelet[2774]: I0813 01:14:25.687237 2774 kubelet.go:2351] "Pod admission denied" podUID="e0c54d22-a6f4-4fb0-8c22-b691ca5049dc" pod="tigera-operator/tigera-operator-747864d56d-xt5mz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:25.785510 kubelet[2774]: I0813 01:14:25.784673 2774 kubelet.go:2351] "Pod admission denied" podUID="113b74cf-8f79-4c0d-83ac-5901e5c8208d" pod="tigera-operator/tigera-operator-747864d56d-w8pwf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:25.886886 kubelet[2774]: I0813 01:14:25.886798 2774 kubelet.go:2351] "Pod admission denied" podUID="e23d500a-91b8-4370-a2cd-908905adbaa1" pod="tigera-operator/tigera-operator-747864d56d-n8hbr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:25.936652 kubelet[2774]: I0813 01:14:25.936526 2774 kubelet.go:2351] "Pod admission denied" podUID="dba72463-25c8-4236-adb7-aed1bb27ecef" pod="tigera-operator/tigera-operator-747864d56d-pfntk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:26.036258 kubelet[2774]: I0813 01:14:26.036093 2774 kubelet.go:2351] "Pod admission denied" podUID="81ea345e-651d-44d4-8a5c-a6b4051c24b3" pod="tigera-operator/tigera-operator-747864d56d-fbg45" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:26.134831 kubelet[2774]: I0813 01:14:26.134780 2774 kubelet.go:2351] "Pod admission denied" podUID="d64fc529-5efe-41f5-89db-934eab47ec26" pod="tigera-operator/tigera-operator-747864d56d-hmvv6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:26.234156 kubelet[2774]: I0813 01:14:26.234097 2774 kubelet.go:2351] "Pod admission denied" podUID="107bb75b-4537-4b4c-82d1-c7eaedb3bf10" pod="tigera-operator/tigera-operator-747864d56d-zbk82" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:26.437163 kubelet[2774]: I0813 01:14:26.437078 2774 kubelet.go:2351] "Pod admission denied" podUID="dc9adb21-079d-457b-9ce9-5468c0894b9d" pod="tigera-operator/tigera-operator-747864d56d-2kqdj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:26.536401 kubelet[2774]: I0813 01:14:26.536318 2774 kubelet.go:2351] "Pod admission denied" podUID="a6abe099-51d2-4887-a2e1-3ea1dabeaeb3" pod="tigera-operator/tigera-operator-747864d56d-chnwf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:26.636222 kubelet[2774]: I0813 01:14:26.636159 2774 kubelet.go:2351] "Pod admission denied" podUID="ef1c5f70-1fad-4ce1-9c83-495cad48dd77" pod="tigera-operator/tigera-operator-747864d56d-95nld" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:26.835887 kubelet[2774]: I0813 01:14:26.835623 2774 kubelet.go:2351] "Pod admission denied" podUID="9046d32a-91ba-4775-a41d-f45dadbc093d" pod="tigera-operator/tigera-operator-747864d56d-csdhw" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:26.935318 kubelet[2774]: I0813 01:14:26.935262 2774 kubelet.go:2351] "Pod admission denied" podUID="dd8b17e5-26d5-497d-bbcd-ce9b0c5a54fc" pod="tigera-operator/tigera-operator-747864d56d-26bth" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:26.985430 kubelet[2774]: I0813 01:14:26.985376 2774 kubelet.go:2351] "Pod admission denied" podUID="08f54b6c-e6ef-48ce-bec9-d4249dd531e6" pod="tigera-operator/tigera-operator-747864d56d-ns8l6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:27.085758 kubelet[2774]: I0813 01:14:27.085710 2774 kubelet.go:2351] "Pod admission denied" podUID="279bcfdd-d499-4636-8e39-ec21cc276128" pod="tigera-operator/tigera-operator-747864d56d-4hpk6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:27.184457 kubelet[2774]: I0813 01:14:27.184400 2774 kubelet.go:2351] "Pod admission denied" podUID="f64ea7a1-eedd-4914-9116-a0ea1f4e2d66" pod="tigera-operator/tigera-operator-747864d56d-p9679" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:27.235702 kubelet[2774]: I0813 01:14:27.235628 2774 kubelet.go:2351] "Pod admission denied" podUID="1c012ddc-901f-49ed-a48f-2f5b68fa6ec0" pod="tigera-operator/tigera-operator-747864d56d-b4bs9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:27.335680 kubelet[2774]: I0813 01:14:27.335615 2774 kubelet.go:2351] "Pod admission denied" podUID="f009be81-fb07-4ed5-985a-4b06eefbed76" pod="tigera-operator/tigera-operator-747864d56d-vgspv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:27.437929 kubelet[2774]: I0813 01:14:27.437766 2774 kubelet.go:2351] "Pod admission denied" podUID="53b1a704-da0c-4a96-8c65-f49a8c6e0359" pod="tigera-operator/tigera-operator-747864d56d-tbml6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:27.504370 containerd[1543]: time="2025-08-13T01:14:27.504291478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c5f875d88-fvdcx,Uid:e328d277-9a4f-4cd1-abd9-2dfeda4cfcc9,Namespace:calico-system,Attempt:0,}" Aug 13 01:14:27.551256 kubelet[2774]: I0813 01:14:27.551104 2774 kubelet.go:2351] "Pod admission denied" podUID="934cb981-8ca7-4c0e-b2f7-a6df30ff7fd6" pod="tigera-operator/tigera-operator-747864d56d-pfdqj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:27.584794 containerd[1543]: time="2025-08-13T01:14:27.584716060Z" level=error msg="Failed to destroy network for sandbox \"a3f0982275e9e3d8a05944a9ad93f6cfbc08595c529a12bf4007f5bf151dbbbe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:27.589266 systemd[1]: run-netns-cni\x2de5c15d94\x2d790e\x2d8af9\x2de2c1\x2d8d60dab8b01d.mount: Deactivated successfully. 
Aug 13 01:14:27.593225 containerd[1543]: time="2025-08-13T01:14:27.593172366Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c5f875d88-fvdcx,Uid:e328d277-9a4f-4cd1-abd9-2dfeda4cfcc9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3f0982275e9e3d8a05944a9ad93f6cfbc08595c529a12bf4007f5bf151dbbbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:27.594255 kubelet[2774]: E0813 01:14:27.594186 2774 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3f0982275e9e3d8a05944a9ad93f6cfbc08595c529a12bf4007f5bf151dbbbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:27.595051 kubelet[2774]: E0813 01:14:27.594481 2774 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3f0982275e9e3d8a05944a9ad93f6cfbc08595c529a12bf4007f5bf151dbbbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c5f875d88-fvdcx" Aug 13 01:14:27.595051 kubelet[2774]: E0813 01:14:27.594741 2774 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3f0982275e9e3d8a05944a9ad93f6cfbc08595c529a12bf4007f5bf151dbbbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c5f875d88-fvdcx" Aug 13 01:14:27.595051 kubelet[2774]: E0813 01:14:27.594800 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-c5f875d88-fvdcx_calico-system(e328d277-9a4f-4cd1-abd9-2dfeda4cfcc9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-c5f875d88-fvdcx_calico-system(e328d277-9a4f-4cd1-abd9-2dfeda4cfcc9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a3f0982275e9e3d8a05944a9ad93f6cfbc08595c529a12bf4007f5bf151dbbbe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c5f875d88-fvdcx" podUID="e328d277-9a4f-4cd1-abd9-2dfeda4cfcc9" Aug 13 01:14:27.636630 kubelet[2774]: I0813 01:14:27.636572 2774 kubelet.go:2351] "Pod admission denied" podUID="5cd7a33b-8a52-4850-af6d-b5c453240b22" pod="tigera-operator/tigera-operator-747864d56d-lstsr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:27.737506 kubelet[2774]: I0813 01:14:27.737059 2774 kubelet.go:2351] "Pod admission denied" podUID="66547609-7f77-4849-8b30-85de5b432fb3" pod="tigera-operator/tigera-operator-747864d56d-ddx5d" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:27.937374 kubelet[2774]: I0813 01:14:27.937298 2774 kubelet.go:2351] "Pod admission denied" podUID="ed555491-8efc-4ecb-9b4a-95e5183e1722" pod="tigera-operator/tigera-operator-747864d56d-94dnv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:28.036701 kubelet[2774]: I0813 01:14:28.036531 2774 kubelet.go:2351] "Pod admission denied" podUID="ff3c02f8-a362-4b2e-8ef7-c9d7275889ba" pod="tigera-operator/tigera-operator-747864d56d-gmrbm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:28.135922 kubelet[2774]: I0813 01:14:28.135867 2774 kubelet.go:2351] "Pod admission denied" podUID="d2308020-4422-4881-865f-c99f43569771" pod="tigera-operator/tigera-operator-747864d56d-s6nb6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:28.237131 kubelet[2774]: I0813 01:14:28.236757 2774 kubelet.go:2351] "Pod admission denied" podUID="3f198279-f7c4-4f70-8972-47f73ec57af7" pod="tigera-operator/tigera-operator-747864d56d-qlz7z" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:28.338242 kubelet[2774]: I0813 01:14:28.338072 2774 kubelet.go:2351] "Pod admission denied" podUID="10510744-7f1c-4a6f-9a0e-1f5adabd7877" pod="tigera-operator/tigera-operator-747864d56d-mrzpn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:28.437530 kubelet[2774]: I0813 01:14:28.437397 2774 kubelet.go:2351] "Pod admission denied" podUID="d5d9173a-02d6-4ced-beb5-6e0c2ebfa642" pod="tigera-operator/tigera-operator-747864d56d-z5dsw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:28.539485 kubelet[2774]: I0813 01:14:28.539410 2774 kubelet.go:2351] "Pod admission denied" podUID="064ebabd-e7f2-4a4b-9783-53da748b5687" pod="tigera-operator/tigera-operator-747864d56d-hjh6m" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:28.635799 kubelet[2774]: I0813 01:14:28.635635 2774 kubelet.go:2351] "Pod admission denied" podUID="90a3025a-3f41-4d78-98f6-d3d43abf85e0" pod="tigera-operator/tigera-operator-747864d56d-8chp5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:28.689406 kubelet[2774]: I0813 01:14:28.688649 2774 kubelet.go:2351] "Pod admission denied" podUID="ca703845-33c8-4dc7-80e3-bbfcf3239c75" pod="tigera-operator/tigera-operator-747864d56d-dntc8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:28.787389 kubelet[2774]: I0813 01:14:28.787307 2774 kubelet.go:2351] "Pod admission denied" podUID="2ae24491-092a-4093-830b-cf87c37b0f52" pod="tigera-operator/tigera-operator-747864d56d-cprt6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:28.887268 kubelet[2774]: I0813 01:14:28.887073 2774 kubelet.go:2351] "Pod admission denied" podUID="71fdd3b1-0a98-4e28-831c-1fd4b0832551" pod="tigera-operator/tigera-operator-747864d56d-ccs68" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:28.987372 kubelet[2774]: I0813 01:14:28.987288 2774 kubelet.go:2351] "Pod admission denied" podUID="4d9aed69-b0a1-44fd-8dd9-41ffebba8200" pod="tigera-operator/tigera-operator-747864d56d-7nxm5" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:29.190679 kubelet[2774]: I0813 01:14:29.190605 2774 kubelet.go:2351] "Pod admission denied" podUID="428abd51-38be-498d-9cc0-0cdb108eaa6f" pod="tigera-operator/tigera-operator-747864d56d-lj587" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:29.286651 kubelet[2774]: I0813 01:14:29.286586 2774 kubelet.go:2351] "Pod admission denied" podUID="324e1794-8e20-44a2-acdc-d28642e15485" pod="tigera-operator/tigera-operator-747864d56d-6rdcb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:29.336553 kubelet[2774]: I0813 01:14:29.336485 2774 kubelet.go:2351] "Pod admission denied" podUID="2d247466-d659-452d-89cb-159289a38fab" pod="tigera-operator/tigera-operator-747864d56d-w8q2t" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:29.437598 kubelet[2774]: I0813 01:14:29.437515 2774 kubelet.go:2351] "Pod admission denied" podUID="acd776bc-41cf-4028-b070-9a5b88fc90f7" pod="tigera-operator/tigera-operator-747864d56d-l77bz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:29.504207 kubelet[2774]: E0813 01:14:29.503854 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:14:29.535885 kubelet[2774]: I0813 01:14:29.535824 2774 kubelet.go:2351] "Pod admission denied" podUID="db53d427-883d-4949-aca5-8451aee8a702" pod="tigera-operator/tigera-operator-747864d56d-69tbk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:29.589930 kubelet[2774]: I0813 01:14:29.589460 2774 kubelet.go:2351] "Pod admission denied" podUID="a3552f08-79ce-43ed-b643-99cb585852ea" pod="tigera-operator/tigera-operator-747864d56d-vlbpb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:29.686148 kubelet[2774]: I0813 01:14:29.686069 2774 kubelet.go:2351] "Pod admission denied" podUID="3444583c-ae06-407b-bd61-263c7983d1c7" pod="tigera-operator/tigera-operator-747864d56d-m6r5r" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:29.889626 kubelet[2774]: I0813 01:14:29.889409 2774 kubelet.go:2351] "Pod admission denied" podUID="28b0414d-537e-4334-ab08-ec05df0dd153" pod="tigera-operator/tigera-operator-747864d56d-jcqsx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:29.986380 kubelet[2774]: I0813 01:14:29.985773 2774 kubelet.go:2351] "Pod admission denied" podUID="11837c46-08d6-417d-b205-492b0fd3231f" pod="tigera-operator/tigera-operator-747864d56d-pfs6n" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:30.086381 kubelet[2774]: I0813 01:14:30.086319 2774 kubelet.go:2351] "Pod admission denied" podUID="23e142e7-8787-469c-a0a3-d5bfb410ceb8" pod="tigera-operator/tigera-operator-747864d56d-6pm8m" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:30.198933 kubelet[2774]: I0813 01:14:30.198856 2774 kubelet.go:2351] "Pod admission denied" podUID="c77c3d07-4ddc-47ab-b38b-469ac3da7347" pod="tigera-operator/tigera-operator-747864d56d-jn56k" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:30.291991 kubelet[2774]: I0813 01:14:30.291533 2774 kubelet.go:2351] "Pod admission denied" podUID="35e7bfec-09a8-40bf-9e40-bea8858393c8" pod="tigera-operator/tigera-operator-747864d56d-t6rtn" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:30.390067 kubelet[2774]: I0813 01:14:30.390009 2774 kubelet.go:2351] "Pod admission denied" podUID="96e8894b-b2c7-46bb-9acf-207ed70c78cc" pod="tigera-operator/tigera-operator-747864d56d-j4kcs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:30.485956 kubelet[2774]: I0813 01:14:30.485535 2774 kubelet.go:2351] "Pod admission denied" podUID="7b834e9d-4f58-4391-acc9-abc0f46606ec" pod="tigera-operator/tigera-operator-747864d56d-zl7rm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:30.504798 containerd[1543]: time="2025-08-13T01:14:30.504710429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bc7dg,Uid:e1069218-cdb9-4130-adce-4bdd23361a59,Namespace:calico-system,Attempt:0,}" Aug 13 01:14:30.587383 containerd[1543]: time="2025-08-13T01:14:30.584723223Z" level=error msg="Failed to destroy network for sandbox \"bfaed5114cc4b59e0ad8939a33e69cdd8d32babaee214236a71fc523eac99816\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:30.589213 containerd[1543]: time="2025-08-13T01:14:30.588770084Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bc7dg,Uid:e1069218-cdb9-4130-adce-4bdd23361a59,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bfaed5114cc4b59e0ad8939a33e69cdd8d32babaee214236a71fc523eac99816\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:30.589691 systemd[1]: run-netns-cni\x2d89028988\x2dfd6c\x2de8ee\x2daad1\x2dbe95c9f40fff.mount: Deactivated successfully. 
Aug 13 01:14:30.591044 kubelet[2774]: E0813 01:14:30.590592 2774 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bfaed5114cc4b59e0ad8939a33e69cdd8d32babaee214236a71fc523eac99816\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:30.591044 kubelet[2774]: E0813 01:14:30.590709 2774 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bfaed5114cc4b59e0ad8939a33e69cdd8d32babaee214236a71fc523eac99816\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bc7dg" Aug 13 01:14:30.591044 kubelet[2774]: E0813 01:14:30.590748 2774 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bfaed5114cc4b59e0ad8939a33e69cdd8d32babaee214236a71fc523eac99816\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bc7dg" Aug 13 01:14:30.591044 kubelet[2774]: E0813 01:14:30.590795 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bc7dg_calico-system(e1069218-cdb9-4130-adce-4bdd23361a59)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bc7dg_calico-system(e1069218-cdb9-4130-adce-4bdd23361a59)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bfaed5114cc4b59e0ad8939a33e69cdd8d32babaee214236a71fc523eac99816\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bc7dg" podUID="e1069218-cdb9-4130-adce-4bdd23361a59" Aug 13 01:14:30.601344 kubelet[2774]: I0813 01:14:30.601254 2774 kubelet.go:2351] "Pod admission denied" podUID="913baaf1-c73d-4093-a9a3-2be631e9e890" pod="tigera-operator/tigera-operator-747864d56d-sxsqg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:30.666966 update_engine[1517]: I20250813 01:14:30.666832 1517 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Aug 13 01:14:30.667723 update_engine[1517]: I20250813 01:14:30.667541 1517 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Aug 13 01:14:30.668287 update_engine[1517]: I20250813 01:14:30.668120 1517 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Aug 13 01:14:30.669009 update_engine[1517]: E20250813 01:14:30.668964 1517 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Aug 13 01:14:30.669394 update_engine[1517]: I20250813 01:14:30.669261 1517 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Aug 13 01:14:30.688848 kubelet[2774]: I0813 01:14:30.688782 2774 kubelet.go:2351] "Pod admission denied" podUID="39f0a1aa-2ae4-4794-8cc5-963c790fcf8a" pod="tigera-operator/tigera-operator-747864d56d-dbgsx" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:30.791531 kubelet[2774]: I0813 01:14:30.791340 2774 kubelet.go:2351] "Pod admission denied" podUID="ab27e650-cf61-4fb8-8adc-71643a96ba31" pod="tigera-operator/tigera-operator-747864d56d-22nfc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:30.895479 kubelet[2774]: I0813 01:14:30.895330 2774 kubelet.go:2351] "Pod admission denied" podUID="8cb1980b-7992-4ebf-9eb1-471baa3c0f9a" pod="tigera-operator/tigera-operator-747864d56d-m57t6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:30.986864 kubelet[2774]: I0813 01:14:30.986811 2774 kubelet.go:2351] "Pod admission denied" podUID="fa7d1519-5561-45ec-a3e1-a15f273b7a36" pod="tigera-operator/tigera-operator-747864d56d-smww7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:31.094498 kubelet[2774]: I0813 01:14:31.093421 2774 kubelet.go:2351] "Pod admission denied" podUID="c857dc2e-3a70-4909-a2ab-ec634933d4b9" pod="tigera-operator/tigera-operator-747864d56d-zxgxs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:31.190298 kubelet[2774]: I0813 01:14:31.190011 2774 kubelet.go:2351] "Pod admission denied" podUID="b0bf8562-dd5e-4041-b651-ba453ac35d96" pod="tigera-operator/tigera-operator-747864d56d-wnmg5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:31.286477 kubelet[2774]: I0813 01:14:31.286419 2774 kubelet.go:2351] "Pod admission denied" podUID="8664cf32-7e93-4c67-aab8-9a60656380c5" pod="tigera-operator/tigera-operator-747864d56d-fh2xx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:31.387239 kubelet[2774]: I0813 01:14:31.386908 2774 kubelet.go:2351] "Pod admission denied" podUID="e25ea3b0-3bf7-4c64-af02-beb8a95a7bef" pod="tigera-operator/tigera-operator-747864d56d-6jvn9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:31.487653 kubelet[2774]: I0813 01:14:31.487589 2774 kubelet.go:2351] "Pod admission denied" podUID="a11590b7-83c9-4c23-8de3-8c822945a199" pod="tigera-operator/tigera-operator-747864d56d-f7cm8" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:31.504283 kubelet[2774]: E0813 01:14:31.504151 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:14:31.505460 containerd[1543]: time="2025-08-13T01:14:31.505405379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qnwfr,Uid:713f2d42-12d1-4f50-bdc7-9964c95a9e2a,Namespace:kube-system,Attempt:0,}" Aug 13 01:14:31.571197 containerd[1543]: time="2025-08-13T01:14:31.571122737Z" level=error msg="Failed to destroy network for sandbox \"8649d24ac24fa45cfdee67a6fcd70517881200f03b5681f3d2996581abf67ee6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:31.575616 containerd[1543]: time="2025-08-13T01:14:31.575380400Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qnwfr,Uid:713f2d42-12d1-4f50-bdc7-9964c95a9e2a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8649d24ac24fa45cfdee67a6fcd70517881200f03b5681f3d2996581abf67ee6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:31.576499 kubelet[2774]: E0813 01:14:31.575885 2774 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8649d24ac24fa45cfdee67a6fcd70517881200f03b5681f3d2996581abf67ee6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:31.576499 kubelet[2774]: E0813 01:14:31.575934 2774 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8649d24ac24fa45cfdee67a6fcd70517881200f03b5681f3d2996581abf67ee6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qnwfr" Aug 13 01:14:31.576499 kubelet[2774]: E0813 01:14:31.575959 2774 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8649d24ac24fa45cfdee67a6fcd70517881200f03b5681f3d2996581abf67ee6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qnwfr" Aug 13 01:14:31.576499 kubelet[2774]: E0813 01:14:31.576002 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-qnwfr_kube-system(713f2d42-12d1-4f50-bdc7-9964c95a9e2a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-qnwfr_kube-system(713f2d42-12d1-4f50-bdc7-9964c95a9e2a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8649d24ac24fa45cfdee67a6fcd70517881200f03b5681f3d2996581abf67ee6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-qnwfr" podUID="713f2d42-12d1-4f50-bdc7-9964c95a9e2a" Aug 13 01:14:31.576799 systemd[1]: run-netns-cni\x2da0be99be\x2d5256\x2d00eb\x2d1e53\x2d8c90e0d2b992.mount: Deactivated successfully. Aug 13 01:14:31.589889 kubelet[2774]: I0813 01:14:31.589846 2774 kubelet.go:2351] "Pod admission denied" podUID="ca563412-fcd5-43b8-bf93-0226c1fa753d" pod="tigera-operator/tigera-operator-747864d56d-pj9mn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:31.689066 kubelet[2774]: I0813 01:14:31.689005 2774 kubelet.go:2351] "Pod admission denied" podUID="66657b8f-a0bd-4f7c-b2b0-d0159d0bb413" pod="tigera-operator/tigera-operator-747864d56d-lldvw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:31.792344 kubelet[2774]: I0813 01:14:31.791949 2774 kubelet.go:2351] "Pod admission denied" podUID="03095cf2-90a8-458b-8008-0cef378ad42d" pod="tigera-operator/tigera-operator-747864d56d-vmj5w" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:31.839239 kubelet[2774]: I0813 01:14:31.838850 2774 kubelet.go:2351] "Pod admission denied" podUID="f831797b-f833-4aee-b447-0586b667ca76" pod="tigera-operator/tigera-operator-747864d56d-ztckc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:31.939888 kubelet[2774]: I0813 01:14:31.939395 2774 kubelet.go:2351] "Pod admission denied" podUID="d642ea4a-f0ba-4d7d-b1b9-ee081bcda041" pod="tigera-operator/tigera-operator-747864d56d-9prd4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:32.139336 kubelet[2774]: I0813 01:14:32.139248 2774 kubelet.go:2351] "Pod admission denied" podUID="057ad3d1-a10b-474a-a3c2-8969d6fdef5b" pod="tigera-operator/tigera-operator-747864d56d-jk65h" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:32.238150 kubelet[2774]: I0813 01:14:32.237835 2774 kubelet.go:2351] "Pod admission denied" podUID="b2a6214d-1374-41ed-8b21-b6287348c3c3" pod="tigera-operator/tigera-operator-747864d56d-vmc2r" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:32.339606 kubelet[2774]: I0813 01:14:32.339552 2774 kubelet.go:2351] "Pod admission denied" podUID="68e58945-7c41-4be4-95cf-0cd119de999a" pod="tigera-operator/tigera-operator-747864d56d-nh29l" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:32.392462 kubelet[2774]: I0813 01:14:32.392411 2774 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:14:32.393410 kubelet[2774]: I0813 01:14:32.392504 2774 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:14:32.394863 containerd[1543]: time="2025-08-13T01:14:32.394798121Z" level=info msg="StopPodSandbox for \"402bb977e58eb67d453f2a6cd3a6ce5f3883cb4f4dfc5d75ff8c0e3017ad6d77\"" Aug 13 01:14:32.395015 containerd[1543]: time="2025-08-13T01:14:32.394959224Z" level=info msg="TearDown network for sandbox \"402bb977e58eb67d453f2a6cd3a6ce5f3883cb4f4dfc5d75ff8c0e3017ad6d77\" successfully" Aug 13 01:14:32.395015 containerd[1543]: time="2025-08-13T01:14:32.394981384Z" level=info msg="StopPodSandbox for \"402bb977e58eb67d453f2a6cd3a6ce5f3883cb4f4dfc5d75ff8c0e3017ad6d77\" returns successfully" Aug 13 01:14:32.396701 containerd[1543]: time="2025-08-13T01:14:32.396628698Z" level=info msg="RemovePodSandbox for \"402bb977e58eb67d453f2a6cd3a6ce5f3883cb4f4dfc5d75ff8c0e3017ad6d77\"" Aug 13 01:14:32.396701 containerd[1543]: time="2025-08-13T01:14:32.396703549Z" level=info msg="Forcibly stopping sandbox \"402bb977e58eb67d453f2a6cd3a6ce5f3883cb4f4dfc5d75ff8c0e3017ad6d77\"" Aug 13 01:14:32.397161 containerd[1543]: time="2025-08-13T01:14:32.396868911Z" level=info msg="TearDown network for sandbox \"402bb977e58eb67d453f2a6cd3a6ce5f3883cb4f4dfc5d75ff8c0e3017ad6d77\" successfully" Aug 13 01:14:32.399035 containerd[1543]: time="2025-08-13T01:14:32.398997512Z" level=info msg="Ensure that sandbox 402bb977e58eb67d453f2a6cd3a6ce5f3883cb4f4dfc5d75ff8c0e3017ad6d77 in task-service has been cleanup successfully" Aug 13 01:14:32.401487 containerd[1543]: time="2025-08-13T01:14:32.401453808Z" level=info msg="RemovePodSandbox \"402bb977e58eb67d453f2a6cd3a6ce5f3883cb4f4dfc5d75ff8c0e3017ad6d77\" returns successfully" Aug 13 01:14:32.402316 kubelet[2774]: I0813 01:14:32.402291 2774 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:14:32.415785 kubelet[2774]: I0813 01:14:32.415737 2774 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:14:32.416145 kubelet[2774]: I0813 01:14:32.415819 2774 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-c5f875d88-fvdcx","kube-system/coredns-668d6bf9bc-wp8jf","kube-system/coredns-668d6bf9bc-qnwfr","calico-system/csi-node-driver-bc7dg","calico-system/calico-node-9bc9j","calico-system/calico-typha-bf7d6589c-n48dd","kube-system/kube-controller-manager-172-234-199-8","kube-system/kube-proxy-8p8zx","kube-system/kube-apiserver-172-234-199-8","kube-system/kube-scheduler-172-234-199-8"] Aug 13 01:14:32.416145 kubelet[2774]: E0813 01:14:32.415855 2774 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-c5f875d88-fvdcx" Aug 13 01:14:32.416145 kubelet[2774]: E0813 01:14:32.415867 2774 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-wp8jf" Aug 13 01:14:32.416145 kubelet[2774]: E0813 01:14:32.415874 2774 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-qnwfr" Aug 13 01:14:32.416145 kubelet[2774]: E0813 01:14:32.415882 2774 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-bc7dg" Aug 13 01:14:32.416145 kubelet[2774]: E0813 
01:14:32.415889 2774 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-9bc9j" Aug 13 01:14:32.416145 kubelet[2774]: E0813 01:14:32.415899 2774 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-bf7d6589c-n48dd" Aug 13 01:14:32.416145 kubelet[2774]: E0813 01:14:32.416098 2774 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-199-8" Aug 13 01:14:32.416145 kubelet[2774]: E0813 01:14:32.416110 2774 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-8p8zx" Aug 13 01:14:32.416145 kubelet[2774]: E0813 01:14:32.416126 2774 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-199-8" Aug 13 01:14:32.416145 kubelet[2774]: E0813 01:14:32.416137 2774 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-199-8" Aug 13 01:14:32.416145 kubelet[2774]: I0813 01:14:32.416154 2774 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 01:14:32.541467 kubelet[2774]: I0813 01:14:32.540307 2774 kubelet.go:2351] "Pod admission denied" podUID="e4aeeecb-cfb9-44a4-ae1f-4c3437b0cbd0" pod="tigera-operator/tigera-operator-747864d56d-68gf9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:32.638813 kubelet[2774]: I0813 01:14:32.638742 2774 kubelet.go:2351] "Pod admission denied" podUID="a859bf7e-dab9-4db8-9541-44f8473800f6" pod="tigera-operator/tigera-operator-747864d56d-jgtfq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:32.738862 kubelet[2774]: I0813 01:14:32.738341 2774 kubelet.go:2351] "Pod admission denied" podUID="41c88467-2ae9-4b36-bb58-140bfe3a33ba" pod="tigera-operator/tigera-operator-747864d56d-t8g5h" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:32.841906 kubelet[2774]: I0813 01:14:32.841468 2774 kubelet.go:2351] "Pod admission denied" podUID="e85acc69-7037-4e8d-b31b-9abe2f0b8516" pod="tigera-operator/tigera-operator-747864d56d-w9984" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:32.944293 kubelet[2774]: I0813 01:14:32.943789 2774 kubelet.go:2351] "Pod admission denied" podUID="de34c748-d6fe-4e62-a070-272cfdf74af0" pod="tigera-operator/tigera-operator-747864d56d-mcv7b" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:33.040789 kubelet[2774]: I0813 01:14:33.040708 2774 kubelet.go:2351] "Pod admission denied" podUID="03845d60-e417-41f6-9493-6329b89da2b5" pod="tigera-operator/tigera-operator-747864d56d-vrmfs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:33.139906 kubelet[2774]: I0813 01:14:33.139737 2774 kubelet.go:2351] "Pod admission denied" podUID="76f0f650-31d9-4efb-81b8-f3edfa7512ff" pod="tigera-operator/tigera-operator-747864d56d-2lg9f" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:33.239345 kubelet[2774]: I0813 01:14:33.239281 2774 kubelet.go:2351] "Pod admission denied" podUID="5b4fedd7-7a42-4637-ad4b-762405e41a8a" pod="tigera-operator/tigera-operator-747864d56d-9b8jl" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:33.337946 kubelet[2774]: I0813 01:14:33.337887 2774 kubelet.go:2351] "Pod admission denied" podUID="2d1f7fee-fee0-4fd6-a520-4f7605ccc98c" pod="tigera-operator/tigera-operator-747864d56d-86k4s" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:33.439119 kubelet[2774]: I0813 01:14:33.438882 2774 kubelet.go:2351] "Pod admission denied" podUID="051d7f4a-11f8-46c4-87b8-2d76267b1463" pod="tigera-operator/tigera-operator-747864d56d-4868d" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:33.488749 kubelet[2774]: I0813 01:14:33.488662 2774 kubelet.go:2351] "Pod admission denied" podUID="439195f6-68cc-4f15-8f36-c4a41e37253a" pod="tigera-operator/tigera-operator-747864d56d-qb2pc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:33.589500 kubelet[2774]: I0813 01:14:33.589435 2774 kubelet.go:2351] "Pod admission denied" podUID="8b604772-7efa-4877-871f-e091c65b864e" pod="tigera-operator/tigera-operator-747864d56d-68gr5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:33.688269 kubelet[2774]: I0813 01:14:33.688018 2774 kubelet.go:2351] "Pod admission denied" podUID="1511b43c-b204-4062-b5af-1b042c5d333c" pod="tigera-operator/tigera-operator-747864d56d-xxlkl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:33.786865 kubelet[2774]: I0813 01:14:33.786711 2774 kubelet.go:2351] "Pod admission denied" podUID="4e80ccb5-6cd2-4a74-8306-1dc4d6d6ea89" pod="tigera-operator/tigera-operator-747864d56d-9q6k4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:33.987326 kubelet[2774]: I0813 01:14:33.986946 2774 kubelet.go:2351] "Pod admission denied" podUID="5eaf85e0-fcfc-48dc-b7e9-851992432833" pod="tigera-operator/tigera-operator-747864d56d-lzl2d" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:34.088063 kubelet[2774]: I0813 01:14:34.087854 2774 kubelet.go:2351] "Pod admission denied" podUID="28755be2-5b79-4cd5-bc70-62c45b8ea10e" pod="tigera-operator/tigera-operator-747864d56d-qn4kq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:34.189522 kubelet[2774]: I0813 01:14:34.189461 2774 kubelet.go:2351] "Pod admission denied" podUID="99278c6b-79b4-4fd0-ac41-a1e67467a1eb" pod="tigera-operator/tigera-operator-747864d56d-v255m" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:34.394671 kubelet[2774]: I0813 01:14:34.394303 2774 kubelet.go:2351] "Pod admission denied" podUID="fbb7fb76-c6d1-4f6c-be4c-b4e6ae120238" pod="tigera-operator/tigera-operator-747864d56d-gc7n5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:34.490996 kubelet[2774]: I0813 01:14:34.490844 2774 kubelet.go:2351] "Pod admission denied" podUID="30d0cb81-e2f9-4b99-a763-766ae4bcc0ed" pod="tigera-operator/tigera-operator-747864d56d-h2496" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:34.507897 kubelet[2774]: E0813 01:14:34.507806 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount929401159: write /var/lib/containerd/tmpmounts/containerd-mount929401159/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-9bc9j" podUID="0909eaba-89ba-4b02-b2f3-a17e3b6e2afc" Aug 13 01:14:34.588458 kubelet[2774]: I0813 01:14:34.588398 2774 kubelet.go:2351] "Pod admission denied" podUID="a040ba4c-7c8f-478d-8461-5f8bef26dfbb" pod="tigera-operator/tigera-operator-747864d56d-xsjkl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:34.792374 kubelet[2774]: I0813 01:14:34.791695 2774 kubelet.go:2351] "Pod admission denied" podUID="9160584c-9bd3-4758-aa46-48f3fd88c653" pod="tigera-operator/tigera-operator-747864d56d-x8w6h" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:34.886175 kubelet[2774]: I0813 01:14:34.886121 2774 kubelet.go:2351] "Pod admission denied" podUID="7b5148df-49ff-44ab-8bcb-8784b2ddfbc8" pod="tigera-operator/tigera-operator-747864d56d-gpfcr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:34.989236 kubelet[2774]: I0813 01:14:34.988981 2774 kubelet.go:2351] "Pod admission denied" podUID="ce1ff34d-ddd4-4447-a4fd-df8e8d7daba4" pod="tigera-operator/tigera-operator-747864d56d-mxvkd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:35.193585 kubelet[2774]: I0813 01:14:35.193510 2774 kubelet.go:2351] "Pod admission denied" podUID="bc0f5786-29df-4044-9669-d1079366a3e0" pod="tigera-operator/tigera-operator-747864d56d-dxvf8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:35.288113 kubelet[2774]: I0813 01:14:35.287655 2774 kubelet.go:2351] "Pod admission denied" podUID="76f3b88a-809c-48c8-a5f9-ec875523d655" pod="tigera-operator/tigera-operator-747864d56d-x8tbw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:35.387439 kubelet[2774]: I0813 01:14:35.387388 2774 kubelet.go:2351] "Pod admission denied" podUID="ae9346eb-6fe8-4d54-8e97-94e3188bb5bf" pod="tigera-operator/tigera-operator-747864d56d-mqtf8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:35.489609 kubelet[2774]: I0813 01:14:35.488321 2774 kubelet.go:2351] "Pod admission denied" podUID="730d5d4f-956e-4adb-b46b-7aeef975a3e7" pod="tigera-operator/tigera-operator-747864d56d-tkfbp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:35.599678 kubelet[2774]: I0813 01:14:35.599380 2774 kubelet.go:2351] "Pod admission denied" podUID="eff12edf-a534-490b-a8ab-d2b2285e3159" pod="tigera-operator/tigera-operator-747864d56d-xwdp2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:35.687845 kubelet[2774]: I0813 01:14:35.687707 2774 kubelet.go:2351] "Pod admission denied" podUID="8cd6b5b7-6004-41eb-9bd3-573cbb565563" pod="tigera-operator/tigera-operator-747864d56d-gbd89" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:35.787411 kubelet[2774]: I0813 01:14:35.787021 2774 kubelet.go:2351] "Pod admission denied" podUID="a52f44c8-e981-4726-8efe-c96e3c6e1fc2" pod="tigera-operator/tigera-operator-747864d56d-wrgmm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:35.989702 kubelet[2774]: I0813 01:14:35.989615 2774 kubelet.go:2351] "Pod admission denied" podUID="c69d53da-4cd0-47c5-ac88-fec63cd99ab4" pod="tigera-operator/tigera-operator-747864d56d-rv5df" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:36.091591 kubelet[2774]: I0813 01:14:36.091137 2774 kubelet.go:2351] "Pod admission denied" podUID="a578fec8-91be-47c7-918f-981c41cb677a" pod="tigera-operator/tigera-operator-747864d56d-cqqhm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:36.188533 kubelet[2774]: I0813 01:14:36.188465 2774 kubelet.go:2351] "Pod admission denied" podUID="2bd70282-d88d-4e0e-b7a9-25a2ca34e96a" pod="tigera-operator/tigera-operator-747864d56d-skh2z" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:36.288536 kubelet[2774]: I0813 01:14:36.288486 2774 kubelet.go:2351] "Pod admission denied" podUID="2d4e91b1-76f4-4b8e-9d0c-bc78244c49c7" pod="tigera-operator/tigera-operator-747864d56d-4bbjw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:36.390476 kubelet[2774]: I0813 01:14:36.390268 2774 kubelet.go:2351] "Pod admission denied" podUID="1519588d-7730-48ca-b058-f4c139336dd1" pod="tigera-operator/tigera-operator-747864d56d-k575h" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:36.489031 kubelet[2774]: I0813 01:14:36.488941 2774 kubelet.go:2351] "Pod admission denied" podUID="f9091815-4dc1-46a8-9b55-991f8f5d678e" pod="tigera-operator/tigera-operator-747864d56d-7mzlp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:36.590785 kubelet[2774]: I0813 01:14:36.590717 2774 kubelet.go:2351] "Pod admission denied" podUID="add84eff-335e-4bf5-a41c-9010714e68d4" pod="tigera-operator/tigera-operator-747864d56d-7t5dm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:36.691956 kubelet[2774]: I0813 01:14:36.691885 2774 kubelet.go:2351] "Pod admission denied" podUID="ad834f56-f9e0-4b1a-a232-8c3583447e08" pod="tigera-operator/tigera-operator-747864d56d-jg5cv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:36.792731 kubelet[2774]: I0813 01:14:36.792656 2774 kubelet.go:2351] "Pod admission denied" podUID="449c6986-ace3-4953-83a4-f7f0763b3584" pod="tigera-operator/tigera-operator-747864d56d-bf42b" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:36.892556 kubelet[2774]: I0813 01:14:36.892487 2774 kubelet.go:2351] "Pod admission denied" podUID="bbdf5660-3053-4f08-8b01-6421c1da7f38" pod="tigera-operator/tigera-operator-747864d56d-bnb9q" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:36.991390 kubelet[2774]: I0813 01:14:36.989746 2774 kubelet.go:2351] "Pod admission denied" podUID="aa2a372c-2522-43fa-a959-f54eff5cfbd1" pod="tigera-operator/tigera-operator-747864d56d-ghpdc" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:37.090448 kubelet[2774]: I0813 01:14:37.090378 2774 kubelet.go:2351] "Pod admission denied" podUID="b7fbdc75-07cd-43dc-a067-f31315d103af" pod="tigera-operator/tigera-operator-747864d56d-n6tkj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:37.186565 kubelet[2774]: I0813 01:14:37.186498 2774 kubelet.go:2351] "Pod admission denied" podUID="a81ba671-09dc-450d-9e91-086cbec0294b" pod="tigera-operator/tigera-operator-747864d56d-txd9s" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:37.288870 kubelet[2774]: I0813 01:14:37.288456 2774 kubelet.go:2351] "Pod admission denied" podUID="ada8300f-0c89-43cf-9f43-5c712a794672" pod="tigera-operator/tigera-operator-747864d56d-bkxl5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:37.387973 kubelet[2774]: I0813 01:14:37.387893 2774 kubelet.go:2351] "Pod admission denied" podUID="a127f83e-a276-43eb-bdd0-7e67f4c6d496" pod="tigera-operator/tigera-operator-747864d56d-l9qxg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:37.590408 kubelet[2774]: I0813 01:14:37.588659 2774 kubelet.go:2351] "Pod admission denied" podUID="14bdb3d7-9bbe-41ef-9021-840bddb2cd39" pod="tigera-operator/tigera-operator-747864d56d-8d9xv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:37.705404 kubelet[2774]: I0813 01:14:37.703396 2774 kubelet.go:2351] "Pod admission denied" podUID="d17eaa8f-7da0-43ec-8c92-fbf3959dea1d" pod="tigera-operator/tigera-operator-747864d56d-pn8v8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:37.791914 kubelet[2774]: I0813 01:14:37.791826 2774 kubelet.go:2351] "Pod admission denied" podUID="b9660fa8-739f-43ea-aaa2-bba05f2310f7" pod="tigera-operator/tigera-operator-747864d56d-8hbkz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:37.890853 kubelet[2774]: I0813 01:14:37.890289 2774 kubelet.go:2351] "Pod admission denied" podUID="5d7ecfba-1ca1-4726-bc45-5dcb9edf56b2" pod="tigera-operator/tigera-operator-747864d56d-pkmtl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:37.990626 kubelet[2774]: I0813 01:14:37.990556 2774 kubelet.go:2351] "Pod admission denied" podUID="1a3fffd8-d59e-4cfa-a986-7968c34f99ed" pod="tigera-operator/tigera-operator-747864d56d-jgzmr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:38.191369 kubelet[2774]: I0813 01:14:38.191256 2774 kubelet.go:2351] "Pod admission denied" podUID="4bd30dc1-865f-4ae7-a6a9-661e7e4a7eb0" pod="tigera-operator/tigera-operator-747864d56d-hzlp5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:38.289320 kubelet[2774]: I0813 01:14:38.289248 2774 kubelet.go:2351] "Pod admission denied" podUID="4b5d84b9-b875-43be-9428-c6b9c28896fd" pod="tigera-operator/tigera-operator-747864d56d-smn47" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:38.343381 kubelet[2774]: I0813 01:14:38.342393 2774 kubelet.go:2351] "Pod admission denied" podUID="959f0373-93e1-4167-a669-b7fa08e45f98" pod="tigera-operator/tigera-operator-747864d56d-4rhht" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:38.439571 kubelet[2774]: I0813 01:14:38.439510 2774 kubelet.go:2351] "Pod admission denied" podUID="46c67588-2514-423d-8964-2bfc6377621d" pod="tigera-operator/tigera-operator-747864d56d-8frzm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:38.537439 kubelet[2774]: I0813 01:14:38.537255 2774 kubelet.go:2351] "Pod admission denied" podUID="8a06e24e-4953-4567-a78f-9a2e58d78241" pod="tigera-operator/tigera-operator-747864d56d-rf6j8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:38.638174 kubelet[2774]: I0813 01:14:38.638104 2774 kubelet.go:2351] "Pod admission denied" podUID="4a061770-de34-4cef-9b12-157fed37a5b3" pod="tigera-operator/tigera-operator-747864d56d-l45ps" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:38.843547 kubelet[2774]: I0813 01:14:38.843342 2774 kubelet.go:2351] "Pod admission denied" podUID="c7421831-b392-4666-8127-bd1dd6526322" pod="tigera-operator/tigera-operator-747864d56d-4x5sx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:38.943336 kubelet[2774]: I0813 01:14:38.942790 2774 kubelet.go:2351] "Pod admission denied" podUID="2f0a3ac8-4662-4a64-a2cd-9085f751d6d9" pod="tigera-operator/tigera-operator-747864d56d-5qpc4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:39.040576 kubelet[2774]: I0813 01:14:39.040504 2774 kubelet.go:2351] "Pod admission denied" podUID="85c7d7e5-f996-4f72-9093-b717f79f5676" pod="tigera-operator/tigera-operator-747864d56d-c89kw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:39.140567 kubelet[2774]: I0813 01:14:39.140391 2774 kubelet.go:2351] "Pod admission denied" podUID="7e5d7b85-5ea9-461a-a1fe-945484678d57" pod="tigera-operator/tigera-operator-747864d56d-wtvgt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:39.240511 kubelet[2774]: I0813 01:14:39.240438 2774 kubelet.go:2351] "Pod admission denied" podUID="b7ef57f3-0a36-493e-b802-37d9f965cc82" pod="tigera-operator/tigera-operator-747864d56d-qvkv9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:39.339961 kubelet[2774]: I0813 01:14:39.339887 2774 kubelet.go:2351] "Pod admission denied" podUID="84e8c820-8bc3-403f-a3f5-94e1268db685" pod="tigera-operator/tigera-operator-747864d56d-jrkfk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:39.442829 kubelet[2774]: I0813 01:14:39.442767 2774 kubelet.go:2351] "Pod admission denied" podUID="51d40d0a-76f7-4822-9ec7-60d4906d20d3" pod="tigera-operator/tigera-operator-747864d56d-q2bhq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:39.504034 kubelet[2774]: E0813 01:14:39.503986 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:14:39.504902 containerd[1543]: time="2025-08-13T01:14:39.504863220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wp8jf,Uid:56ce87f4-747f-470b-8388-a8400bdda009,Namespace:kube-system,Attempt:0,}" Aug 13 01:14:39.555539 kubelet[2774]: I0813 01:14:39.555461 2774 kubelet.go:2351] "Pod admission denied" podUID="3b0e2f75-2b0b-433c-b472-481d7b7391b5" pod="tigera-operator/tigera-operator-747864d56d-bb5fw" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:39.602681 containerd[1543]: time="2025-08-13T01:14:39.602590014Z" level=error msg="Failed to destroy network for sandbox \"9f1142943ce92f5f52dc4003a0aab28810cac616035726dfcd9632328bfe31ca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:39.606528 containerd[1543]: time="2025-08-13T01:14:39.606399502Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wp8jf,Uid:56ce87f4-747f-470b-8388-a8400bdda009,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f1142943ce92f5f52dc4003a0aab28810cac616035726dfcd9632328bfe31ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:39.606690 systemd[1]: run-netns-cni\x2d00f007bf\x2d9082\x2d30df\x2d8e6c\x2d9e7ecf77ed9d.mount: Deactivated successfully. Aug 13 01:14:39.606957 kubelet[2774]: E0813 01:14:39.606811 2774 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f1142943ce92f5f52dc4003a0aab28810cac616035726dfcd9632328bfe31ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:39.606957 kubelet[2774]: E0813 01:14:39.606887 2774 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f1142943ce92f5f52dc4003a0aab28810cac616035726dfcd9632328bfe31ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wp8jf" Aug 13 01:14:39.606957 kubelet[2774]: E0813 01:14:39.606925 2774 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f1142943ce92f5f52dc4003a0aab28810cac616035726dfcd9632328bfe31ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wp8jf" Aug 13 01:14:39.607132 kubelet[2774]: E0813 01:14:39.606979 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-wp8jf_kube-system(56ce87f4-747f-470b-8388-a8400bdda009)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-wp8jf_kube-system(56ce87f4-747f-470b-8388-a8400bdda009)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9f1142943ce92f5f52dc4003a0aab28810cac616035726dfcd9632328bfe31ca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-wp8jf" podUID="56ce87f4-747f-470b-8388-a8400bdda009" Aug 13 01:14:39.639390 kubelet[2774]: I0813 01:14:39.639312 2774 kubelet.go:2351] "Pod admission denied" podUID="f0ad33a3-e373-4a7f-8c16-527720ed6df3" pod="tigera-operator/tigera-operator-747864d56d-hsr7s" reason="Evicted" 
message="The node had condition: [DiskPressure]. " Aug 13 01:14:39.739495 kubelet[2774]: I0813 01:14:39.739316 2774 kubelet.go:2351] "Pod admission denied" podUID="c772fe54-b5d9-42ee-a1a9-227a29efe0e0" pod="tigera-operator/tigera-operator-747864d56d-ck68k" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:39.789062 kubelet[2774]: I0813 01:14:39.788990 2774 kubelet.go:2351] "Pod admission denied" podUID="be69130e-61c4-468f-a19d-c58f6799c469" pod="tigera-operator/tigera-operator-747864d56d-kj8rg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:39.889978 kubelet[2774]: I0813 01:14:39.889876 2774 kubelet.go:2351] "Pod admission denied" podUID="7cdf21be-a9e6-4cf2-947f-00b5357dc774" pod="tigera-operator/tigera-operator-747864d56d-ndrxr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:40.090063 kubelet[2774]: I0813 01:14:40.089868 2774 kubelet.go:2351] "Pod admission denied" podUID="1a364cf5-c7a5-46b5-ba8e-5578e70b32c1" pod="tigera-operator/tigera-operator-747864d56d-hxbcf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:40.192644 kubelet[2774]: I0813 01:14:40.192570 2774 kubelet.go:2351] "Pod admission denied" podUID="5a43b55e-d1f0-4185-a15b-2ea8f19d7074" pod="tigera-operator/tigera-operator-747864d56d-z2z4w" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:40.287654 kubelet[2774]: I0813 01:14:40.287580 2774 kubelet.go:2351] "Pod admission denied" podUID="6990c0c6-048d-4022-9073-aa17b7ef89f8" pod="tigera-operator/tigera-operator-747864d56d-kcpsz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:40.390398 kubelet[2774]: I0813 01:14:40.390211 2774 kubelet.go:2351] "Pod admission denied" podUID="12981f1f-9756-4ff6-89d5-08d7224cb4a7" pod="tigera-operator/tigera-operator-747864d56d-wrnhm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:40.489274 kubelet[2774]: I0813 01:14:40.489209 2774 kubelet.go:2351] "Pod admission denied" podUID="2829ae99-246c-4b0a-bdec-9b07e6aec923" pod="tigera-operator/tigera-operator-747864d56d-vblq5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:40.671392 update_engine[1517]: I20250813 01:14:40.671128 1517 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Aug 13 01:14:40.672384 update_engine[1517]: I20250813 01:14:40.672078 1517 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Aug 13 01:14:40.672730 update_engine[1517]: I20250813 01:14:40.672706 1517 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Aug 13 01:14:40.673496 update_engine[1517]: E20250813 01:14:40.673373 1517 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Aug 13 01:14:40.673496 update_engine[1517]: I20250813 01:14:40.673468 1517 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Aug 13 01:14:40.689708 kubelet[2774]: I0813 01:14:40.689629 2774 kubelet.go:2351] "Pod admission denied" podUID="5c14a142-7fe1-4803-ad07-760ef29e15dd" pod="tigera-operator/tigera-operator-747864d56d-dp2tn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:40.802200 kubelet[2774]: I0813 01:14:40.802081 2774 kubelet.go:2351] "Pod admission denied" podUID="e3170798-c590-4116-b21c-e66c5ed9df1d" pod="tigera-operator/tigera-operator-747864d56d-gkdwc" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:40.890090 kubelet[2774]: I0813 01:14:40.890033 2774 kubelet.go:2351] "Pod admission denied" podUID="4a47d31e-90d2-4904-b72f-ba3629024020" pod="tigera-operator/tigera-operator-747864d56d-s49qc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:40.988948 kubelet[2774]: I0813 01:14:40.988691 2774 kubelet.go:2351] "Pod admission denied" podUID="b9763a0d-78e4-4481-bd56-15316bd84bd0" pod="tigera-operator/tigera-operator-747864d56d-sstjk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:41.089601 kubelet[2774]: I0813 01:14:41.089533 2774 kubelet.go:2351] "Pod admission denied" podUID="51e1213c-4257-45d9-b2a1-c6c03f14d123" pod="tigera-operator/tigera-operator-747864d56d-stvht" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:41.189718 kubelet[2774]: I0813 01:14:41.189623 2774 kubelet.go:2351] "Pod admission denied" podUID="337a8368-ce22-467b-b509-862c1a2549f8" pod="tigera-operator/tigera-operator-747864d56d-hnfg5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:41.292470 kubelet[2774]: I0813 01:14:41.292257 2774 kubelet.go:2351] "Pod admission denied" podUID="b920d617-dace-488a-aafe-657d406debc0" pod="tigera-operator/tigera-operator-747864d56d-hrlvg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:41.392163 kubelet[2774]: I0813 01:14:41.392043 2774 kubelet.go:2351] "Pod admission denied" podUID="504e1737-47bd-4510-a641-cfe995ff9a8e" pod="tigera-operator/tigera-operator-747864d56d-4szjs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:41.441958 kubelet[2774]: I0813 01:14:41.441879 2774 kubelet.go:2351] "Pod admission denied" podUID="d19b40a1-f321-48b3-9e0b-1abb1fa20fbb" pod="tigera-operator/tigera-operator-747864d56d-ll47b" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:41.541004 kubelet[2774]: I0813 01:14:41.540941 2774 kubelet.go:2351] "Pod admission denied" podUID="34859b5c-f7bc-4fc8-b8e8-1dcd84f62756" pod="tigera-operator/tigera-operator-747864d56d-shnmm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:41.640117 kubelet[2774]: I0813 01:14:41.639917 2774 kubelet.go:2351] "Pod admission denied" podUID="c6c1186a-3f0d-4f82-a198-054f8560de6e" pod="tigera-operator/tigera-operator-747864d56d-hljql" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:41.747736 kubelet[2774]: I0813 01:14:41.747679 2774 kubelet.go:2351] "Pod admission denied" podUID="7f01eb41-7d14-4fd8-a063-81fdea64880c" pod="tigera-operator/tigera-operator-747864d56d-49kdw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:41.949057 kubelet[2774]: I0813 01:14:41.948909 2774 kubelet.go:2351] "Pod admission denied" podUID="2b5c9931-7ad8-46f0-9180-c4f47e9225b4" pod="tigera-operator/tigera-operator-747864d56d-84hmk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:42.040131 kubelet[2774]: I0813 01:14:42.040076 2774 kubelet.go:2351] "Pod admission denied" podUID="147b0c8a-3cf5-4240-8a36-b48894fcf646" pod="tigera-operator/tigera-operator-747864d56d-pck4w" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:42.140276 kubelet[2774]: I0813 01:14:42.140212 2774 kubelet.go:2351] "Pod admission denied" podUID="cc0660d8-8683-4b36-8a8d-33f662fd866c" pod="tigera-operator/tigera-operator-747864d56d-l4m9m" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:42.245963 kubelet[2774]: I0813 01:14:42.245751 2774 kubelet.go:2351] "Pod admission denied" podUID="da157a13-17c6-42ac-9454-47933ef051cc" pod="tigera-operator/tigera-operator-747864d56d-2ccc2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:42.339947 kubelet[2774]: I0813 01:14:42.339878 2774 kubelet.go:2351] "Pod admission denied" podUID="9cfc49b8-4177-44ae-93fc-d4a79c4d8502" pod="tigera-operator/tigera-operator-747864d56d-hqsnx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:42.446380 kubelet[2774]: I0813 01:14:42.446128 2774 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:14:42.446380 kubelet[2774]: I0813 01:14:42.446180 2774 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:14:42.455032 kubelet[2774]: I0813 01:14:42.454337 2774 kubelet.go:2351] "Pod admission denied" podUID="289b03ac-edef-4e49-81a7-7a5f7f9f26cd" pod="tigera-operator/tigera-operator-747864d56d-dgjj4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:42.457210 kubelet[2774]: I0813 01:14:42.457177 2774 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:14:42.488202 kubelet[2774]: I0813 01:14:42.488160 2774 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:14:42.488451 kubelet[2774]: I0813 01:14:42.488251 2774 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-c5f875d88-fvdcx","kube-system/coredns-668d6bf9bc-wp8jf","kube-system/coredns-668d6bf9bc-qnwfr","calico-system/csi-node-driver-bc7dg","calico-system/calico-node-9bc9j","calico-system/calico-typha-bf7d6589c-n48dd","kube-system/kube-controller-manager-172-234-199-8","kube-system/kube-proxy-8p8zx","kube-system/kube-apiserver-172-234-199-8","kube-system/kube-scheduler-172-234-199-8"] Aug 13 01:14:42.488451 kubelet[2774]: E0813 01:14:42.488284 2774 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-c5f875d88-fvdcx" Aug 13 01:14:42.488451 kubelet[2774]: E0813 01:14:42.488296 2774 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-wp8jf" Aug 13 01:14:42.488451 kubelet[2774]: E0813 01:14:42.488303 2774 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-qnwfr" Aug 13 01:14:42.488451 kubelet[2774]: E0813 01:14:42.488310 2774 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-bc7dg" Aug 13 01:14:42.488451 kubelet[2774]: E0813 01:14:42.488316 2774 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-9bc9j" Aug 13 01:14:42.488451 kubelet[2774]: E0813 01:14:42.488328 2774 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-bf7d6589c-n48dd" Aug 13 01:14:42.488451 kubelet[2774]: E0813 01:14:42.488338 2774 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-199-8" Aug 13 
01:14:42.489387 kubelet[2774]: E0813 01:14:42.489367 2774 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-8p8zx" Aug 13 01:14:42.489387 kubelet[2774]: E0813 01:14:42.489390 2774 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-199-8" Aug 13 01:14:42.489788 kubelet[2774]: E0813 01:14:42.489400 2774 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-199-8" Aug 13 01:14:42.489788 kubelet[2774]: I0813 01:14:42.489410 2774 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 01:14:42.543186 kubelet[2774]: I0813 01:14:42.543014 2774 kubelet.go:2351] "Pod admission denied" podUID="18251675-e365-4b1c-848a-c687557a666e" pod="tigera-operator/tigera-operator-747864d56d-mjlt7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:42.641296 kubelet[2774]: I0813 01:14:42.641226 2774 kubelet.go:2351] "Pod admission denied" podUID="f1460f76-3f51-4eea-8a06-b549d0bd0478" pod="tigera-operator/tigera-operator-747864d56d-mjz7n" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:42.740842 kubelet[2774]: I0813 01:14:42.740766 2774 kubelet.go:2351] "Pod admission denied" podUID="c06de1c4-6780-4b00-80f6-680ba46b76cb" pod="tigera-operator/tigera-operator-747864d56d-r9jnl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:42.841660 kubelet[2774]: I0813 01:14:42.841433 2774 kubelet.go:2351] "Pod admission denied" podUID="bd85f5b8-5ddf-40fd-928d-d39a627f6b95" pod="tigera-operator/tigera-operator-747864d56d-jz96k" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:42.950444 kubelet[2774]: I0813 01:14:42.949970 2774 kubelet.go:2351] "Pod admission denied" podUID="343db6ab-0890-4006-8fb9-4354a5e65e5e" pod="tigera-operator/tigera-operator-747864d56d-ml462" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:43.043749 kubelet[2774]: I0813 01:14:43.043658 2774 kubelet.go:2351] "Pod admission denied" podUID="ddde312b-9697-4374-bbe7-617673a76835" pod="tigera-operator/tigera-operator-747864d56d-59cfk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:43.141545 kubelet[2774]: I0813 01:14:43.140869 2774 kubelet.go:2351] "Pod admission denied" podUID="28bffd16-4a9e-4e37-9a71-9d8089797916" pod="tigera-operator/tigera-operator-747864d56d-hw4gw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:43.343776 kubelet[2774]: I0813 01:14:43.343662 2774 kubelet.go:2351] "Pod admission denied" podUID="5ba8b57f-a877-47cc-ac71-01c8401b9103" pod="tigera-operator/tigera-operator-747864d56d-z97cq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:43.441129 kubelet[2774]: I0813 01:14:43.441056 2774 kubelet.go:2351] "Pod admission denied" podUID="4c0b0f4c-cf6c-4c7a-85f2-994659452fe2" pod="tigera-operator/tigera-operator-747864d56d-vqsz7" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:43.504497 containerd[1543]: time="2025-08-13T01:14:43.504437135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c5f875d88-fvdcx,Uid:e328d277-9a4f-4cd1-abd9-2dfeda4cfcc9,Namespace:calico-system,Attempt:0,}" Aug 13 01:14:43.547244 kubelet[2774]: I0813 01:14:43.547120 2774 kubelet.go:2351] "Pod admission denied" podUID="51b7b5cc-0cb5-4c72-a1cf-74ae4aead8e2" pod="tigera-operator/tigera-operator-747864d56d-wghkt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:43.581882 containerd[1543]: time="2025-08-13T01:14:43.581780994Z" level=error msg="Failed to destroy network for sandbox \"c5b94bc788f5754096ddd4d3215cb951a5128d27f64258282f1d8313ac233268\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:43.584918 containerd[1543]: time="2025-08-13T01:14:43.584845349Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c5f875d88-fvdcx,Uid:e328d277-9a4f-4cd1-abd9-2dfeda4cfcc9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5b94bc788f5754096ddd4d3215cb951a5128d27f64258282f1d8313ac233268\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:43.585654 kubelet[2774]: E0813 01:14:43.585606 2774 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5b94bc788f5754096ddd4d3215cb951a5128d27f64258282f1d8313ac233268\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:43.585719 kubelet[2774]: E0813 01:14:43.585697 2774 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5b94bc788f5754096ddd4d3215cb951a5128d27f64258282f1d8313ac233268\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c5f875d88-fvdcx" Aug 13 01:14:43.585757 kubelet[2774]: E0813 01:14:43.585728 2774 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5b94bc788f5754096ddd4d3215cb951a5128d27f64258282f1d8313ac233268\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c5f875d88-fvdcx" Aug 13 01:14:43.585854 kubelet[2774]: E0813 01:14:43.585798 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-c5f875d88-fvdcx_calico-system(e328d277-9a4f-4cd1-abd9-2dfeda4cfcc9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-c5f875d88-fvdcx_calico-system(e328d277-9a4f-4cd1-abd9-2dfeda4cfcc9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c5b94bc788f5754096ddd4d3215cb951a5128d27f64258282f1d8313ac233268\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c5f875d88-fvdcx" podUID="e328d277-9a4f-4cd1-abd9-2dfeda4cfcc9" Aug 13 01:14:43.586789 systemd[1]: run-netns-cni\x2d8fc7eec7\x2d1cee\x2de5ad\x2d8ba1\x2d7d4c78fd44b5.mount: Deactivated successfully. Aug 13 01:14:43.639649 kubelet[2774]: I0813 01:14:43.639573 2774 kubelet.go:2351] "Pod admission denied" podUID="5c4a0939-3d47-457b-b2b8-db8f3511b05d" pod="tigera-operator/tigera-operator-747864d56d-t2785" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:43.742512 kubelet[2774]: I0813 01:14:43.742104 2774 kubelet.go:2351] "Pod admission denied" podUID="4461011a-dabf-439e-9928-8f6f144dc829" pod="tigera-operator/tigera-operator-747864d56d-595kd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:43.841464 kubelet[2774]: I0813 01:14:43.841388 2774 kubelet.go:2351] "Pod admission denied" podUID="0f1fade5-9307-498c-8c57-604f9a182d6b" pod="tigera-operator/tigera-operator-747864d56d-jsbcf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:43.946159 kubelet[2774]: I0813 01:14:43.946056 2774 kubelet.go:2351] "Pod admission denied" podUID="26cdb5d4-345c-45f4-ba5f-7634be797278" pod="tigera-operator/tigera-operator-747864d56d-rwhpv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:44.043290 kubelet[2774]: I0813 01:14:44.043123 2774 kubelet.go:2351] "Pod admission denied" podUID="6fc5ee77-3018-4ee7-81f8-8feb8250a220" pod="tigera-operator/tigera-operator-747864d56d-bcq9d" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:44.096741 kubelet[2774]: I0813 01:14:44.096679 2774 kubelet.go:2351] "Pod admission denied" podUID="7dcec7b2-ab14-4b7f-a38c-a067c8717cac" pod="tigera-operator/tigera-operator-747864d56d-gk2ss" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:44.190324 kubelet[2774]: I0813 01:14:44.190262 2774 kubelet.go:2351] "Pod admission denied" podUID="a1dd061e-879b-4a73-8640-e2c2c6ea915e" pod="tigera-operator/tigera-operator-747864d56d-9jwkj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:44.291455 kubelet[2774]: I0813 01:14:44.291388 2774 kubelet.go:2351] "Pod admission denied" podUID="4187d4f3-3d03-46a7-b539-16bc61eb2d47" pod="tigera-operator/tigera-operator-747864d56d-6c2z8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:44.399766 kubelet[2774]: I0813 01:14:44.399160 2774 kubelet.go:2351] "Pod admission denied" podUID="24de359f-04a0-44e0-98e5-3aac92b1a3bf" pod="tigera-operator/tigera-operator-747864d56d-sl2x7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:44.593315 kubelet[2774]: I0813 01:14:44.593233 2774 kubelet.go:2351] "Pod admission denied" podUID="fea86fba-96f9-4765-a44d-5ccfd49f5aae" pod="tigera-operator/tigera-operator-747864d56d-8vj8w" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:44.688898 kubelet[2774]: I0813 01:14:44.688836 2774 kubelet.go:2351] "Pod admission denied" podUID="a38bec87-53a6-4067-93ae-e4811ddb6a70" pod="tigera-operator/tigera-operator-747864d56d-hsx24" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:44.791844 kubelet[2774]: I0813 01:14:44.791768 2774 kubelet.go:2351] "Pod admission denied" podUID="f8a51cba-541f-441f-bdf0-f664a709c942" pod="tigera-operator/tigera-operator-747864d56d-m6gbw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:44.890213 kubelet[2774]: I0813 01:14:44.890108 2774 kubelet.go:2351] "Pod admission denied" podUID="75d6e614-bd93-4c4a-8cbb-43dc35fed864" pod="tigera-operator/tigera-operator-747864d56d-h8fdv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:44.991648 kubelet[2774]: I0813 01:14:44.991220 2774 kubelet.go:2351] "Pod admission denied" podUID="f4489da1-ada2-4f44-bbb5-cfa455eae5a1" pod="tigera-operator/tigera-operator-747864d56d-6tkz2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:45.092186 kubelet[2774]: I0813 01:14:45.092104 2774 kubelet.go:2351] "Pod admission denied" podUID="2017f0da-7d4c-4fae-80c6-9f62b2d785dd" pod="tigera-operator/tigera-operator-747864d56d-g86kc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:45.146450 kubelet[2774]: I0813 01:14:45.145952 2774 kubelet.go:2351] "Pod admission denied" podUID="f9535469-c0cb-4024-9cab-8e700f03e036" pod="tigera-operator/tigera-operator-747864d56d-dz5l6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:45.244589 kubelet[2774]: I0813 01:14:45.243987 2774 kubelet.go:2351] "Pod admission denied" podUID="09455337-cf85-47fb-ad37-fd4465686671" pod="tigera-operator/tigera-operator-747864d56d-wtn4n" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:45.350376 kubelet[2774]: I0813 01:14:45.350283 2774 kubelet.go:2351] "Pod admission denied" podUID="c57019ac-2e3f-4cfa-90a5-c56cf81796d4" pod="tigera-operator/tigera-operator-747864d56d-fntcw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:45.440710 kubelet[2774]: I0813 01:14:45.440640 2774 kubelet.go:2351] "Pod admission denied" podUID="15772e90-c4af-4d3c-80e3-6d37c8bdf36e" pod="tigera-operator/tigera-operator-747864d56d-fhblj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:45.504719 containerd[1543]: time="2025-08-13T01:14:45.504015436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bc7dg,Uid:e1069218-cdb9-4130-adce-4bdd23361a59,Namespace:calico-system,Attempt:0,}" Aug 13 01:14:45.550005 kubelet[2774]: I0813 01:14:45.549538 2774 kubelet.go:2351] "Pod admission denied" podUID="b7a2c313-f61e-4dbf-b1f8-586f0f5409cc" pod="tigera-operator/tigera-operator-747864d56d-w57md" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:45.594996 containerd[1543]: time="2025-08-13T01:14:45.594828664Z" level=error msg="Failed to destroy network for sandbox \"152456f1cbdc9e3e0bf08ff17c596fc3e4d23928bbb96ecf09a5b0babb7bff37\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:45.597222 containerd[1543]: time="2025-08-13T01:14:45.597191631Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bc7dg,Uid:e1069218-cdb9-4130-adce-4bdd23361a59,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"152456f1cbdc9e3e0bf08ff17c596fc3e4d23928bbb96ecf09a5b0babb7bff37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:45.597621 kubelet[2774]: E0813 01:14:45.597580 2774 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"152456f1cbdc9e3e0bf08ff17c596fc3e4d23928bbb96ecf09a5b0babb7bff37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:45.597778 kubelet[2774]: E0813 01:14:45.597756 2774 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"152456f1cbdc9e3e0bf08ff17c596fc3e4d23928bbb96ecf09a5b0babb7bff37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bc7dg" Aug 13 01:14:45.597851 kubelet[2774]: E0813 01:14:45.597834 2774 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"152456f1cbdc9e3e0bf08ff17c596fc3e4d23928bbb96ecf09a5b0babb7bff37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bc7dg" Aug 13 01:14:45.597976 kubelet[2774]: E0813 01:14:45.597939 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bc7dg_calico-system(e1069218-cdb9-4130-adce-4bdd23361a59)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bc7dg_calico-system(e1069218-cdb9-4130-adce-4bdd23361a59)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"152456f1cbdc9e3e0bf08ff17c596fc3e4d23928bbb96ecf09a5b0babb7bff37\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bc7dg" podUID="e1069218-cdb9-4130-adce-4bdd23361a59" Aug 13 01:14:45.599827 systemd[1]: run-netns-cni\x2d1503bebe\x2d5db1\x2d2cd6\x2ddeb5\x2d2624d22e7add.mount: Deactivated successfully. 
Aug 13 01:14:45.642277 kubelet[2774]: I0813 01:14:45.642202 2774 kubelet.go:2351] "Pod admission denied" podUID="e369b859-12e6-4859-ae48-22d8f0d429a8" pod="tigera-operator/tigera-operator-747864d56d-kc4rs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:45.742790 kubelet[2774]: I0813 01:14:45.742723 2774 kubelet.go:2351] "Pod admission denied" podUID="de36dd97-794f-4790-ae09-3b87433f5925" pod="tigera-operator/tigera-operator-747864d56d-j4zwq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:45.795882 kubelet[2774]: I0813 01:14:45.795675 2774 kubelet.go:2351] "Pod admission denied" podUID="a3935a41-447a-4160-b33c-9d869676f90c" pod="tigera-operator/tigera-operator-747864d56d-kt5x7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:45.889454 kubelet[2774]: I0813 01:14:45.889320 2774 kubelet.go:2351] "Pod admission denied" podUID="2482b0bc-5f38-4881-8fa6-a3cc5a7638d6" pod="tigera-operator/tigera-operator-747864d56d-gqwf9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:46.095600 kubelet[2774]: I0813 01:14:46.095396 2774 kubelet.go:2351] "Pod admission denied" podUID="56f57fe0-1073-4fe1-a4b5-5373ae6f4537" pod="tigera-operator/tigera-operator-747864d56d-627dl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:46.193880 kubelet[2774]: I0813 01:14:46.193820 2774 kubelet.go:2351] "Pod admission denied" podUID="8e20a801-b9cd-4249-a1c7-8b57d82fdf67" pod="tigera-operator/tigera-operator-747864d56d-5bjxm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:46.253897 kubelet[2774]: I0813 01:14:46.253226 2774 kubelet.go:2351] "Pod admission denied" podUID="8daef5fe-70a5-4caf-872f-f6b1c31e3da4" pod="tigera-operator/tigera-operator-747864d56d-g87bs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:46.342567 kubelet[2774]: I0813 01:14:46.342275 2774 kubelet.go:2351] "Pod admission denied" podUID="21358b62-3805-4b86-a0cd-f50877906535" pod="tigera-operator/tigera-operator-747864d56d-gqmqg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:46.440840 kubelet[2774]: I0813 01:14:46.440775 2774 kubelet.go:2351] "Pod admission denied" podUID="d4be62a8-cc62-4a60-8abb-761b75263eca" pod="tigera-operator/tigera-operator-747864d56d-9bq6m" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:46.508181 kubelet[2774]: E0813 01:14:46.507282 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:14:46.508622 containerd[1543]: time="2025-08-13T01:14:46.507756552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qnwfr,Uid:713f2d42-12d1-4f50-bdc7-9964c95a9e2a,Namespace:kube-system,Attempt:0,}" Aug 13 01:14:46.546561 kubelet[2774]: I0813 01:14:46.546482 2774 kubelet.go:2351] "Pod admission denied" podUID="89af0273-dc06-459d-acf2-eccaf0d0c77d" pod="tigera-operator/tigera-operator-747864d56d-xpdzb" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:46.585480 containerd[1543]: time="2025-08-13T01:14:46.585411857Z" level=error msg="Failed to destroy network for sandbox \"f2ed3e1c74cef578c7aa013671246842a3e9ddab9655364f011daf9afca1176c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:46.588456 containerd[1543]: time="2025-08-13T01:14:46.586731311Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qnwfr,Uid:713f2d42-12d1-4f50-bdc7-9964c95a9e2a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2ed3e1c74cef578c7aa013671246842a3e9ddab9655364f011daf9afca1176c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:46.589709 kubelet[2774]: E0813 01:14:46.589644 2774 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2ed3e1c74cef578c7aa013671246842a3e9ddab9655364f011daf9afca1176c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:46.589809 kubelet[2774]: E0813 01:14:46.589728 2774 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2ed3e1c74cef578c7aa013671246842a3e9ddab9655364f011daf9afca1176c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qnwfr" Aug 13 01:14:46.589809 kubelet[2774]: E0813 01:14:46.589759 2774 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2ed3e1c74cef578c7aa013671246842a3e9ddab9655364f011daf9afca1176c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qnwfr" Aug 13 01:14:46.589901 kubelet[2774]: E0813 01:14:46.589817 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-qnwfr_kube-system(713f2d42-12d1-4f50-bdc7-9964c95a9e2a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-qnwfr_kube-system(713f2d42-12d1-4f50-bdc7-9964c95a9e2a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f2ed3e1c74cef578c7aa013671246842a3e9ddab9655364f011daf9afca1176c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-qnwfr" podUID="713f2d42-12d1-4f50-bdc7-9964c95a9e2a" Aug 13 01:14:46.591619 systemd[1]: run-netns-cni\x2de28b7e0e\x2d6afc\x2d1626\x2ddfd6\x2d95e97aefa274.mount: Deactivated successfully. 
Aug 13 01:14:46.642025 kubelet[2774]: I0813 01:14:46.641937 2774 kubelet.go:2351] "Pod admission denied" podUID="27f3dd0f-3c75-430a-8446-efa90d7e1543" pod="tigera-operator/tigera-operator-747864d56d-mnmgz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:46.742593 kubelet[2774]: I0813 01:14:46.742166 2774 kubelet.go:2351] "Pod admission denied" podUID="d9ddbfd5-1f47-4fd7-b579-13d6e32cf851" pod="tigera-operator/tigera-operator-747864d56d-lmntm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:46.847274 kubelet[2774]: I0813 01:14:46.847185 2774 kubelet.go:2351] "Pod admission denied" podUID="3c4ded0a-137c-45aa-a773-6725188d586c" pod="tigera-operator/tigera-operator-747864d56d-qmxjh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:46.943840 kubelet[2774]: I0813 01:14:46.943720 2774 kubelet.go:2351] "Pod admission denied" podUID="3f839f1f-1c27-43f0-8567-5d45dfbdeab9" pod="tigera-operator/tigera-operator-747864d56d-xtq59" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:47.043335 kubelet[2774]: I0813 01:14:47.043137 2774 kubelet.go:2351] "Pod admission denied" podUID="9f58c689-630c-44e6-8450-31089484056e" pod="tigera-operator/tigera-operator-747864d56d-pqzd6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:47.144798 kubelet[2774]: I0813 01:14:47.144709 2774 kubelet.go:2351] "Pod admission denied" podUID="45dc49fa-c935-40f7-a946-0ea91a804cc9" pod="tigera-operator/tigera-operator-747864d56d-tbcx9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:47.243008 kubelet[2774]: I0813 01:14:47.242931 2774 kubelet.go:2351] "Pod admission denied" podUID="297aebd6-b6c0-49e5-853b-d0a8ab417e98" pod="tigera-operator/tigera-operator-747864d56d-r8875" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:47.342056 kubelet[2774]: I0813 01:14:47.341880 2774 kubelet.go:2351] "Pod admission denied" podUID="4095b451-06ef-495e-9c0c-c258251861ca" pod="tigera-operator/tigera-operator-747864d56d-bn5lp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:47.440653 kubelet[2774]: I0813 01:14:47.440592 2774 kubelet.go:2351] "Pod admission denied" podUID="fb70660d-657d-49d7-8886-85b7dd45fc62" pod="tigera-operator/tigera-operator-747864d56d-bksxt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:47.551375 kubelet[2774]: I0813 01:14:47.550602 2774 kubelet.go:2351] "Pod admission denied" podUID="79b1793e-ed8d-44e2-953e-4a6aae2e430a" pod="tigera-operator/tigera-operator-747864d56d-58n5p" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:47.651881 kubelet[2774]: I0813 01:14:47.651728 2774 kubelet.go:2351] "Pod admission denied" podUID="693e1fc7-1f73-45ee-8e8f-29411bc785d2" pod="tigera-operator/tigera-operator-747864d56d-bzd9r" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:47.694127 kubelet[2774]: I0813 01:14:47.694069 2774 kubelet.go:2351] "Pod admission denied" podUID="fae6f3fe-9867-438c-9ac3-3dc3ec3f59de" pod="tigera-operator/tigera-operator-747864d56d-rzjlh" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:47.790740 kubelet[2774]: I0813 01:14:47.790680 2774 kubelet.go:2351] "Pod admission denied" podUID="e248ff8d-8623-4337-b56c-6041917ababc" pod="tigera-operator/tigera-operator-747864d56d-f7pcj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:47.892116 kubelet[2774]: I0813 01:14:47.892053 2774 kubelet.go:2351] "Pod admission denied" podUID="b34dae3c-572d-4487-97df-f573f9dff8f2" pod="tigera-operator/tigera-operator-747864d56d-kk2k9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:47.993031 kubelet[2774]: I0813 01:14:47.992966 2774 kubelet.go:2351] "Pod admission denied" podUID="e86db460-5d2d-4f11-bcfe-bd9c9b308e96" pod="tigera-operator/tigera-operator-747864d56d-qf857" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:48.191953 kubelet[2774]: I0813 01:14:48.191893 2774 kubelet.go:2351] "Pod admission denied" podUID="3a11aaf5-20ef-40e9-8743-a25a241e904b" pod="tigera-operator/tigera-operator-747864d56d-csplf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:48.292644 kubelet[2774]: I0813 01:14:48.292470 2774 kubelet.go:2351] "Pod admission denied" podUID="caca8fc1-02d1-4d70-b9ec-a82f6cc72c90" pod="tigera-operator/tigera-operator-747864d56d-2khx7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:48.392277 kubelet[2774]: I0813 01:14:48.392209 2774 kubelet.go:2351] "Pod admission denied" podUID="9b761250-b018-48e8-ba06-e0e50f5550c6" pod="tigera-operator/tigera-operator-747864d56d-wb457" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:48.502378 kubelet[2774]: I0813 01:14:48.501699 2774 kubelet.go:2351] "Pod admission denied" podUID="e93d068b-e656-4602-a21b-a31e7f995664" pod="tigera-operator/tigera-operator-747864d56d-m7rxn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:48.510164 kubelet[2774]: E0813 01:14:48.510110 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount929401159: write /var/lib/containerd/tmpmounts/containerd-mount929401159/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-9bc9j" podUID="0909eaba-89ba-4b02-b2f3-a17e3b6e2afc" Aug 13 01:14:48.595455 kubelet[2774]: I0813 01:14:48.595252 2774 kubelet.go:2351] "Pod admission denied" podUID="058e9038-85e8-47b0-8478-c9489870e172" pod="tigera-operator/tigera-operator-747864d56d-8dd5b" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:48.691791 kubelet[2774]: I0813 01:14:48.691719 2774 kubelet.go:2351] "Pod admission denied" podUID="6db9a976-9861-4f0f-bf6c-d8b4a31cdd93" pod="tigera-operator/tigera-operator-747864d56d-2gs4n" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:48.790743 kubelet[2774]: I0813 01:14:48.790682 2774 kubelet.go:2351] "Pod admission denied" podUID="01d5275c-3cc0-4e47-9771-58a9598e22ee" pod="tigera-operator/tigera-operator-747864d56d-khdr5" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:48.895109 kubelet[2774]: I0813 01:14:48.894894 2774 kubelet.go:2351] "Pod admission denied" podUID="462e9ecd-02b1-4fba-a667-a97c41adf577" pod="tigera-operator/tigera-operator-747864d56d-dnv57" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:48.994470 kubelet[2774]: I0813 01:14:48.994406 2774 kubelet.go:2351] "Pod admission denied" podUID="55c1e26b-00ce-4ed8-aaf5-242cbdeb8005" pod="tigera-operator/tigera-operator-747864d56d-f7bzw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:49.092213 kubelet[2774]: I0813 01:14:49.092148 2774 kubelet.go:2351] "Pod admission denied" podUID="6c92e119-2a13-4263-9aec-cfcfff0a673e" pod="tigera-operator/tigera-operator-747864d56d-pn42x" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:49.191770 kubelet[2774]: I0813 01:14:49.191684 2774 kubelet.go:2351] "Pod admission denied" podUID="d9d5499f-ec8b-4be4-a8cb-ebdb9f6e28d8" pod="tigera-operator/tigera-operator-747864d56d-j67wl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:49.297169 kubelet[2774]: I0813 01:14:49.297098 2774 kubelet.go:2351] "Pod admission denied" podUID="47442a2f-f868-4673-b461-6b5aad4e6eb5" pod="tigera-operator/tigera-operator-747864d56d-tsgvt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:49.397534 kubelet[2774]: I0813 01:14:49.397459 2774 kubelet.go:2351] "Pod admission denied" podUID="3004b4b4-126b-4b04-84c5-90c70b115a6f" pod="tigera-operator/tigera-operator-747864d56d-cz6zh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:49.491091 kubelet[2774]: I0813 01:14:49.490573 2774 kubelet.go:2351] "Pod admission denied" podUID="818704e7-ecb0-4a5a-aadd-416cf7a5cb04" pod="tigera-operator/tigera-operator-747864d56d-5zfjj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:49.594855 kubelet[2774]: I0813 01:14:49.594785 2774 kubelet.go:2351] "Pod admission denied" podUID="8ae7aa92-97a5-4e96-aa02-df227883e432" pod="tigera-operator/tigera-operator-747864d56d-2x8pp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:49.791237 kubelet[2774]: I0813 01:14:49.790882 2774 kubelet.go:2351] "Pod admission denied" podUID="dfcdebdb-b8bd-41f6-8b02-d2a5814743e9" pod="tigera-operator/tigera-operator-747864d56d-898nt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:49.892700 kubelet[2774]: I0813 01:14:49.892627 2774 kubelet.go:2351] "Pod admission denied" podUID="1376ff06-4e94-45c7-bb48-bf1feee6037a" pod="tigera-operator/tigera-operator-747864d56d-4zls5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:49.992984 kubelet[2774]: I0813 01:14:49.992910 2774 kubelet.go:2351] "Pod admission denied" podUID="6074956a-3cb8-43fa-8f27-bfbb7874d8d9" pod="tigera-operator/tigera-operator-747864d56d-ggp6t" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:50.193888 kubelet[2774]: I0813 01:14:50.193822 2774 kubelet.go:2351] "Pod admission denied" podUID="074e0305-4287-4434-9648-a319b880486f" pod="tigera-operator/tigera-operator-747864d56d-557sr" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:50.293287 kubelet[2774]: I0813 01:14:50.293209 2774 kubelet.go:2351] "Pod admission denied" podUID="c8c56760-2d10-42c3-8001-6140d88c5f6f" pod="tigera-operator/tigera-operator-747864d56d-jjp6w" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:50.400564 kubelet[2774]: I0813 01:14:50.400277 2774 kubelet.go:2351] "Pod admission denied" podUID="2803c0b1-4464-4832-8b53-0fcc874041cf" pod="tigera-operator/tigera-operator-747864d56d-lfnl7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:50.496233 kubelet[2774]: I0813 01:14:50.496021 2774 kubelet.go:2351] "Pod admission denied" podUID="048728fb-2567-4e46-8d9f-cdf6d65603cf" pod="tigera-operator/tigera-operator-747864d56d-v4sd6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:50.592696 kubelet[2774]: I0813 01:14:50.592380 2774 kubelet.go:2351] "Pod admission denied" podUID="d0f27bd0-1808-4bb2-a56f-c5a19e94357f" pod="tigera-operator/tigera-operator-747864d56d-8pvhs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:50.671968 update_engine[1517]: I20250813 01:14:50.671599 1517 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Aug 13 01:14:50.673245 update_engine[1517]: I20250813 01:14:50.672413 1517 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Aug 13 01:14:50.673458 update_engine[1517]: I20250813 01:14:50.673434 1517 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Aug 13 01:14:50.674303 update_engine[1517]: E20250813 01:14:50.674274 1517 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Aug 13 01:14:50.674687 update_engine[1517]: I20250813 01:14:50.674541 1517 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Aug 13 01:14:50.674687 update_engine[1517]: I20250813 01:14:50.674552 1517 omaha_request_action.cc:617] Omaha request response: Aug 13 01:14:50.675306 update_engine[1517]: E20250813 01:14:50.674732 1517 omaha_request_action.cc:636] Omaha request network transfer failed. Aug 13 01:14:50.675306 update_engine[1517]: I20250813 01:14:50.674785 1517 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Aug 13 01:14:50.675306 update_engine[1517]: I20250813 01:14:50.674793 1517 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Aug 13 01:14:50.675306 update_engine[1517]: I20250813 01:14:50.674799 1517 update_attempter.cc:306] Processing Done. Aug 13 01:14:50.675306 update_engine[1517]: E20250813 01:14:50.674816 1517 update_attempter.cc:619] Update failed. Aug 13 01:14:50.675306 update_engine[1517]: I20250813 01:14:50.674822 1517 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Aug 13 01:14:50.675306 update_engine[1517]: I20250813 01:14:50.674829 1517 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Aug 13 01:14:50.675306 update_engine[1517]: I20250813 01:14:50.674834 1517 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Aug 13 01:14:50.675306 update_engine[1517]: I20250813 01:14:50.674921 1517 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Aug 13 01:14:50.675306 update_engine[1517]: I20250813 01:14:50.674945 1517 omaha_request_action.cc:271] Posting an Omaha request to disabled Aug 13 01:14:50.675306 update_engine[1517]: I20250813 01:14:50.674952 1517 omaha_request_action.cc:272] Request: Aug 13 01:14:50.675306 update_engine[1517]: Aug 13 01:14:50.675306 update_engine[1517]: Aug 13 01:14:50.675306 update_engine[1517]: Aug 13 01:14:50.675306 update_engine[1517]: Aug 13 01:14:50.675306 update_engine[1517]: Aug 13 01:14:50.675306 update_engine[1517]: Aug 13 01:14:50.675306 update_engine[1517]: I20250813 01:14:50.674958 1517 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Aug 13 01:14:50.675306 update_engine[1517]: I20250813 01:14:50.675117 1517 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Aug 13 01:14:50.675969 update_engine[1517]: I20250813 01:14:50.675276 1517 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Aug 13 01:14:50.675997 locksmithd[1550]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Aug 13 01:14:50.676958 update_engine[1517]: E20250813 01:14:50.676610 1517 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Aug 13 01:14:50.676958 update_engine[1517]: I20250813 01:14:50.676714 1517 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Aug 13 01:14:50.676958 update_engine[1517]: I20250813 01:14:50.676725 1517 omaha_request_action.cc:617] Omaha request response: Aug 13 01:14:50.676958 update_engine[1517]: I20250813 01:14:50.676731 1517 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Aug 13 01:14:50.676958 update_engine[1517]: I20250813 01:14:50.676736 1517 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Aug 13 01:14:50.676958 update_engine[1517]: I20250813 01:14:50.676740 1517 update_attempter.cc:306] Processing Done. Aug 13 01:14:50.676958 update_engine[1517]: I20250813 01:14:50.676746 1517 update_attempter.cc:310] Error event sent. Aug 13 01:14:50.676958 update_engine[1517]: I20250813 01:14:50.676755 1517 update_check_scheduler.cc:74] Next update check in 48m27s Aug 13 01:14:50.677556 locksmithd[1550]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Aug 13 01:14:50.694025 kubelet[2774]: I0813 01:14:50.693963 2774 kubelet.go:2351] "Pod admission denied" podUID="c238466c-c890-4c52-83b0-a645c9746e20" pod="tigera-operator/tigera-operator-747864d56d-wpnlw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:50.791521 kubelet[2774]: I0813 01:14:50.791328 2774 kubelet.go:2351] "Pod admission denied" podUID="865b5367-218a-46e8-84ca-546bedb391db" pod="tigera-operator/tigera-operator-747864d56d-jn27c" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:50.889586 kubelet[2774]: I0813 01:14:50.889518 2774 kubelet.go:2351] "Pod admission denied" podUID="8c91eed0-5614-485f-b859-a5d66c26b2c1" pod="tigera-operator/tigera-operator-747864d56d-hcgjl" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:50.996303 kubelet[2774]: I0813 01:14:50.996215 2774 kubelet.go:2351] "Pod admission denied" podUID="63b67fc0-1098-4e44-a2ab-8feae894cbb7" pod="tigera-operator/tigera-operator-747864d56d-59689" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:51.195170 kubelet[2774]: I0813 01:14:51.195088 2774 kubelet.go:2351] "Pod admission denied" podUID="3872bd29-0eb8-45bb-a94c-ac24abc005b2" pod="tigera-operator/tigera-operator-747864d56d-tvrrt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:51.293426 kubelet[2774]: I0813 01:14:51.293332 2774 kubelet.go:2351] "Pod admission denied" podUID="668d3dc3-eaf5-4524-bbee-f3bd6f02292f" pod="tigera-operator/tigera-operator-747864d56d-rcjhm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:51.392130 kubelet[2774]: I0813 01:14:51.392053 2774 kubelet.go:2351] "Pod admission denied" podUID="ab67276f-179e-4360-bcb2-e01481d011b8" pod="tigera-operator/tigera-operator-747864d56d-2h8tx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:51.496565 kubelet[2774]: I0813 01:14:51.496379 2774 kubelet.go:2351] "Pod admission denied" podUID="3547163c-d309-4006-be06-8a0d71569e82" pod="tigera-operator/tigera-operator-747864d56d-6f2lv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:51.549121 kubelet[2774]: I0813 01:14:51.549051 2774 kubelet.go:2351] "Pod admission denied" podUID="0199eaaf-f33c-4ec0-ae98-0ecfffb5951d" pod="tigera-operator/tigera-operator-747864d56d-n6gmg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:51.643159 kubelet[2774]: I0813 01:14:51.643077 2774 kubelet.go:2351] "Pod admission denied" podUID="374c9f76-e605-499d-adfe-00626b26c7d3" pod="tigera-operator/tigera-operator-747864d56d-hbvr5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:51.742299 kubelet[2774]: I0813 01:14:51.742232 2774 kubelet.go:2351] "Pod admission denied" podUID="82a4610c-8b84-446f-adab-cb062933c024" pod="tigera-operator/tigera-operator-747864d56d-vs4fv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:51.847406 kubelet[2774]: I0813 01:14:51.847100 2774 kubelet.go:2351] "Pod admission denied" podUID="0b66c8be-1f4f-4a68-b4bb-4dcb518f1deb" pod="tigera-operator/tigera-operator-747864d56d-mnq5h" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:52.043285 kubelet[2774]: I0813 01:14:52.043218 2774 kubelet.go:2351] "Pod admission denied" podUID="37e3de0d-929d-4bb5-9cf3-a49c54b4e6cd" pod="tigera-operator/tigera-operator-747864d56d-p77kl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:52.143635 kubelet[2774]: I0813 01:14:52.143462 2774 kubelet.go:2351] "Pod admission denied" podUID="b68ee3d5-0c64-4f5d-aaf3-537ca123dc9e" pod="tigera-operator/tigera-operator-747864d56d-9d9p8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:52.244681 kubelet[2774]: I0813 01:14:52.244604 2774 kubelet.go:2351] "Pod admission denied" podUID="af1e7c01-a1a4-4d80-b9ee-fb2c50e03cc1" pod="tigera-operator/tigera-operator-747864d56d-vbfwf" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:52.347563 kubelet[2774]: I0813 01:14:52.347488 2774 kubelet.go:2351] "Pod admission denied" podUID="8fa11788-1425-43bd-aefe-4030c5637c59" pod="tigera-operator/tigera-operator-747864d56d-vnvl4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:52.408025 kubelet[2774]: I0813 01:14:52.406315 2774 kubelet.go:2351] "Pod admission denied" podUID="baab015c-90b2-4771-827c-55ee82ca2ed2" pod="tigera-operator/tigera-operator-747864d56d-fwd2l" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:52.495346 kubelet[2774]: I0813 01:14:52.495289 2774 kubelet.go:2351] "Pod admission denied" podUID="2dc9baae-d4ff-4d1c-ab55-ee123689ebc3" pod="tigera-operator/tigera-operator-747864d56d-mfr9n" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:52.520063 kubelet[2774]: I0813 01:14:52.520012 2774 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:14:52.520063 kubelet[2774]: I0813 01:14:52.520064 2774 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:14:52.524822 kubelet[2774]: I0813 01:14:52.524752 2774 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:14:52.544659 kubelet[2774]: I0813 01:14:52.544306 2774 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:14:52.545140 kubelet[2774]: I0813 01:14:52.544918 2774 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-c5f875d88-fvdcx","kube-system/coredns-668d6bf9bc-wp8jf","kube-system/coredns-668d6bf9bc-qnwfr","calico-system/csi-node-driver-bc7dg","calico-system/calico-node-9bc9j","calico-system/calico-typha-bf7d6589c-n48dd","kube-system/kube-controller-manager-172-234-199-8","kube-system/kube-proxy-8p8zx","kube-system/kube-apiserver-172-234-199-8","kube-system/kube-scheduler-172-234-199-8"] Aug 13 01:14:52.545140 kubelet[2774]: E0813 01:14:52.544957 2774 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-c5f875d88-fvdcx" Aug 13 01:14:52.545140 kubelet[2774]: E0813 01:14:52.544969 2774 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-wp8jf" Aug 13 01:14:52.545140 kubelet[2774]: E0813 01:14:52.544977 2774 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-qnwfr" Aug 13 01:14:52.545140 kubelet[2774]: E0813 01:14:52.544985 2774 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-bc7dg" Aug 13 01:14:52.545140 kubelet[2774]: E0813 01:14:52.544992 2774 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-9bc9j" Aug 13 01:14:52.545140 kubelet[2774]: E0813 01:14:52.545003 2774 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-bf7d6589c-n48dd" Aug 13 01:14:52.545140 kubelet[2774]: E0813 01:14:52.545013 2774 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-199-8" Aug 13 01:14:52.545140 kubelet[2774]: E0813 01:14:52.545026 2774 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-8p8zx" Aug 13 01:14:52.545140 kubelet[2774]: E0813 01:14:52.545041 2774 eviction_manager.go:609] "Eviction manager: 
cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-199-8" Aug 13 01:14:52.545140 kubelet[2774]: E0813 01:14:52.545049 2774 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-199-8" Aug 13 01:14:52.545140 kubelet[2774]: I0813 01:14:52.545061 2774 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 01:14:52.695594 kubelet[2774]: I0813 01:14:52.694276 2774 kubelet.go:2351] "Pod admission denied" podUID="e324d17d-222d-48d9-bc12-f5dd5e1b9dbb" pod="tigera-operator/tigera-operator-747864d56d-4r58n" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:52.792486 kubelet[2774]: I0813 01:14:52.792327 2774 kubelet.go:2351] "Pod admission denied" podUID="3fbb67ce-22f8-4f3e-b7a6-78fb10a4c738" pod="tigera-operator/tigera-operator-747864d56d-zj6vf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:52.892268 kubelet[2774]: I0813 01:14:52.892177 2774 kubelet.go:2351] "Pod admission denied" podUID="097b2afc-e976-4d41-94b5-703f3ebb5ff9" pod="tigera-operator/tigera-operator-747864d56d-8c7ss" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:53.098118 kubelet[2774]: I0813 01:14:53.097910 2774 kubelet.go:2351] "Pod admission denied" podUID="01a5d5fc-36b2-4d75-8965-fad8f12335a0" pod="tigera-operator/tigera-operator-747864d56d-ldtc5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:53.191215 kubelet[2774]: I0813 01:14:53.191133 2774 kubelet.go:2351] "Pod admission denied" podUID="c12857a5-036b-4bff-b6eb-3d99acd597ba" pod="tigera-operator/tigera-operator-747864d56d-m569k" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:53.301527 kubelet[2774]: I0813 01:14:53.300867 2774 kubelet.go:2351] "Pod admission denied" podUID="078734b9-8ff2-4bf5-8c29-4a7ae5c77fc6" pod="tigera-operator/tigera-operator-747864d56d-tqmv5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:53.399011 kubelet[2774]: I0813 01:14:53.397078 2774 kubelet.go:2351] "Pod admission denied" podUID="77c36d9c-bd12-4559-96dd-f8951553a567" pod="tigera-operator/tigera-operator-747864d56d-wlw8g" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:53.442323 kubelet[2774]: I0813 01:14:53.442256 2774 kubelet.go:2351] "Pod admission denied" podUID="f5f20e49-e630-4e52-a32d-53b10a2d2a44" pod="tigera-operator/tigera-operator-747864d56d-z5qtt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:53.503858 kubelet[2774]: E0813 01:14:53.503805 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:14:53.504827 containerd[1543]: time="2025-08-13T01:14:53.504680988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wp8jf,Uid:56ce87f4-747f-470b-8388-a8400bdda009,Namespace:kube-system,Attempt:0,}" Aug 13 01:14:53.555028 kubelet[2774]: I0813 01:14:53.554939 2774 kubelet.go:2351] "Pod admission denied" podUID="ad8d5321-dff6-44b3-b1a8-288617f0cb02" pod="tigera-operator/tigera-operator-747864d56d-5r428" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:53.597394 containerd[1543]: time="2025-08-13T01:14:53.596165113Z" level=error msg="Failed to destroy network for sandbox \"73dc51c399280d55da890219fd654db317fa0700c8319c1bbeabce1c397ed3b9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:53.599103 containerd[1543]: time="2025-08-13T01:14:53.599030781Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wp8jf,Uid:56ce87f4-747f-470b-8388-a8400bdda009,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"73dc51c399280d55da890219fd654db317fa0700c8319c1bbeabce1c397ed3b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:53.600738 kubelet[2774]: E0813 01:14:53.600537 2774 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73dc51c399280d55da890219fd654db317fa0700c8319c1bbeabce1c397ed3b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:53.600738 kubelet[2774]: E0813 01:14:53.600656 2774 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73dc51c399280d55da890219fd654db317fa0700c8319c1bbeabce1c397ed3b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wp8jf" Aug 13 01:14:53.600738 kubelet[2774]: E0813 01:14:53.600685 2774 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73dc51c399280d55da890219fd654db317fa0700c8319c1bbeabce1c397ed3b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wp8jf" Aug 13 01:14:53.600874 kubelet[2774]: E0813 01:14:53.600782 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-wp8jf_kube-system(56ce87f4-747f-470b-8388-a8400bdda009)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-wp8jf_kube-system(56ce87f4-747f-470b-8388-a8400bdda009)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"73dc51c399280d55da890219fd654db317fa0700c8319c1bbeabce1c397ed3b9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-wp8jf" podUID="56ce87f4-747f-470b-8388-a8400bdda009" Aug 13 01:14:53.601713 systemd[1]: run-netns-cni\x2d7ade6fdc\x2d2784\x2daba9\x2db876\x2d334896f27c8a.mount: Deactivated successfully. 
Aug 13 01:14:53.653165 kubelet[2774]: I0813 01:14:53.651988 2774 kubelet.go:2351] "Pod admission denied" podUID="4197f9e5-cc55-48a9-bce9-b6872f4a39a1" pod="tigera-operator/tigera-operator-747864d56d-mk7c8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:53.746186 kubelet[2774]: I0813 01:14:53.745799 2774 kubelet.go:2351] "Pod admission denied" podUID="cb1c2ecc-bc96-4262-86c3-d1df1164eb77" pod="tigera-operator/tigera-operator-747864d56d-575vk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:53.844244 kubelet[2774]: I0813 01:14:53.844171 2774 kubelet.go:2351] "Pod admission denied" podUID="9aa0cdd8-dfe5-48af-9b54-83fcd9a8aacb" pod="tigera-operator/tigera-operator-747864d56d-dzf7x" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:53.950166 kubelet[2774]: I0813 01:14:53.950112 2774 kubelet.go:2351] "Pod admission denied" podUID="617a29fa-abc5-46cc-8b06-872652ca9fba" pod="tigera-operator/tigera-operator-747864d56d-jm5dl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:54.062125 kubelet[2774]: I0813 01:14:54.061261 2774 kubelet.go:2351] "Pod admission denied" podUID="886fa90e-28b5-45ac-b930-073d031cc679" pod="tigera-operator/tigera-operator-747864d56d-v2zzs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:54.143444 kubelet[2774]: I0813 01:14:54.143313 2774 kubelet.go:2351] "Pod admission denied" podUID="0d06538d-346f-4833-b11e-947ff9020c5f" pod="tigera-operator/tigera-operator-747864d56d-s9tc8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:54.244494 kubelet[2774]: I0813 01:14:54.244303 2774 kubelet.go:2351] "Pod admission denied" podUID="5e6e4063-a7b7-43a7-9f45-2723265cc6e3" pod="tigera-operator/tigera-operator-747864d56d-bnnqn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:54.345113 kubelet[2774]: I0813 01:14:54.345054 2774 kubelet.go:2351] "Pod admission denied" podUID="d58ccfa2-8ecc-4c76-a42b-2870329ddf05" pod="tigera-operator/tigera-operator-747864d56d-tqq9d" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:54.449379 kubelet[2774]: I0813 01:14:54.449233 2774 kubelet.go:2351] "Pod admission denied" podUID="9c908fbf-12b3-4dca-bc8d-e4dee0be6313" pod="tigera-operator/tigera-operator-747864d56d-7rp77" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:54.545742 kubelet[2774]: I0813 01:14:54.544830 2774 kubelet.go:2351] "Pod admission denied" podUID="75fe838e-74e5-4715-82fb-24bf854ef499" pod="tigera-operator/tigera-operator-747864d56d-sfxxq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:54.644062 kubelet[2774]: I0813 01:14:54.643991 2774 kubelet.go:2351] "Pod admission denied" podUID="2d8df36c-71b6-4538-8db0-563f83802c0a" pod="tigera-operator/tigera-operator-747864d56d-9c7w7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:54.743090 kubelet[2774]: I0813 01:14:54.743027 2774 kubelet.go:2351] "Pod admission denied" podUID="3ec1731f-5160-4d26-8e5c-4d1af3f4b8ea" pod="tigera-operator/tigera-operator-747864d56d-qcb8x" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:54.850290 kubelet[2774]: I0813 01:14:54.850113 2774 kubelet.go:2351] "Pod admission denied" podUID="c0c619b1-aa42-47cc-bdc9-f4181073fc63" pod="tigera-operator/tigera-operator-747864d56d-2hmn8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:54.945736 kubelet[2774]: I0813 01:14:54.945689 2774 kubelet.go:2351] "Pod admission denied" podUID="fa595bf7-2fa8-43e9-be16-0f83b0093da6" pod="tigera-operator/tigera-operator-747864d56d-sxtbc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:55.047307 kubelet[2774]: I0813 01:14:55.047254 2774 kubelet.go:2351] "Pod admission denied" podUID="84fcb9b8-5e1c-4313-ab0f-eeb028acebae" pod="tigera-operator/tigera-operator-747864d56d-7tbcz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:55.144831 kubelet[2774]: I0813 01:14:55.144302 2774 kubelet.go:2351] "Pod admission denied" podUID="0a773c0c-f45d-4d0f-986f-ea299fc7bd96" pod="tigera-operator/tigera-operator-747864d56d-5699z" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:55.354917 kubelet[2774]: I0813 01:14:55.354146 2774 kubelet.go:2351] "Pod admission denied" podUID="4a026517-2a48-4e02-838d-e0cbc17d633f" pod="tigera-operator/tigera-operator-747864d56d-hgxv7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:55.445220 kubelet[2774]: I0813 01:14:55.445082 2774 kubelet.go:2351] "Pod admission denied" podUID="31bb24b2-a369-4422-8677-22dd92a4def2" pod="tigera-operator/tigera-operator-747864d56d-5f5cj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:55.546898 kubelet[2774]: I0813 01:14:55.546838 2774 kubelet.go:2351] "Pod admission denied" podUID="a2830994-1dda-4949-88d8-dda5f6995109" pod="tigera-operator/tigera-operator-747864d56d-g27rm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:55.642479 kubelet[2774]: I0813 01:14:55.642370 2774 kubelet.go:2351] "Pod admission denied" podUID="4b4a8d99-c545-4a96-8288-2ce9409f6419" pod="tigera-operator/tigera-operator-747864d56d-48sbs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:55.804462 kubelet[2774]: I0813 01:14:55.803141 2774 kubelet.go:2351] "Pod admission denied" podUID="8077b977-848d-4412-8c51-d911f9ad14d9" pod="tigera-operator/tigera-operator-747864d56d-46gcr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:55.871568 kubelet[2774]: I0813 01:14:55.871233 2774 kubelet.go:2351] "Pod admission denied" podUID="246f3b6f-67d1-40aa-9b33-2a6dd4b64886" pod="tigera-operator/tigera-operator-747864d56d-swwtr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:55.996077 kubelet[2774]: I0813 01:14:55.996022 2774 kubelet.go:2351] "Pod admission denied" podUID="e08003fe-4ade-4f9d-a12a-97df5f85f1a8" pod="tigera-operator/tigera-operator-747864d56d-gcq68" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:56.094722 kubelet[2774]: I0813 01:14:56.094288 2774 kubelet.go:2351] "Pod admission denied" podUID="4243c399-1614-4f7f-9d5a-4493cd87b9a9" pod="tigera-operator/tigera-operator-747864d56d-lhknp" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:56.208386 kubelet[2774]: I0813 01:14:56.208310 2774 kubelet.go:2351] "Pod admission denied" podUID="1fc62bf4-c13a-4a96-918c-cc96151bba9f" pod="tigera-operator/tigera-operator-747864d56d-ttn2b" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:56.299718 kubelet[2774]: I0813 01:14:56.299657 2774 kubelet.go:2351] "Pod admission denied" podUID="0dac4f1a-511e-4041-ace3-8e7071292539" pod="tigera-operator/tigera-operator-747864d56d-dsm7r" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:56.504261 kubelet[2774]: I0813 01:14:56.503403 2774 kubelet.go:2351] "Pod admission denied" podUID="6f2d0993-3378-4855-865a-9992a3272be5" pod="tigera-operator/tigera-operator-747864d56d-x7fck" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:56.602892 kubelet[2774]: I0813 01:14:56.602850 2774 kubelet.go:2351] "Pod admission denied" podUID="a9855a79-061a-4bc8-b689-99fa35c47f35" pod="tigera-operator/tigera-operator-747864d56d-zjhs6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:56.695894 kubelet[2774]: I0813 01:14:56.695826 2774 kubelet.go:2351] "Pod admission denied" podUID="879c3791-1e5e-4fd4-ac2a-bf118088167b" pod="tigera-operator/tigera-operator-747864d56d-5rgk8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:56.896215 kubelet[2774]: I0813 01:14:56.895986 2774 kubelet.go:2351] "Pod admission denied" podUID="bd7bf1b2-1382-4428-a471-cc40ce2d81c7" pod="tigera-operator/tigera-operator-747864d56d-r6fkz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:56.994077 kubelet[2774]: I0813 01:14:56.994008 2774 kubelet.go:2351] "Pod admission denied" podUID="56963c13-433e-4710-8152-9e6b19e0836a" pod="tigera-operator/tigera-operator-747864d56d-xmtpr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:57.119363 kubelet[2774]: I0813 01:14:57.119237 2774 kubelet.go:2351] "Pod admission denied" podUID="0a93a805-4904-4d79-a28e-c5edcf71d98a" pod="tigera-operator/tigera-operator-747864d56d-lbpdr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:57.294668 kubelet[2774]: I0813 01:14:57.294599 2774 kubelet.go:2351] "Pod admission denied" podUID="ba70edb2-226d-4758-870f-3f7203abc107" pod="tigera-operator/tigera-operator-747864d56d-2trmv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:57.393545 kubelet[2774]: I0813 01:14:57.393475 2774 kubelet.go:2351] "Pod admission denied" podUID="0b03d357-3a11-4cf3-8c79-535210c980a8" pod="tigera-operator/tigera-operator-747864d56d-gv7pd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:57.495956 kubelet[2774]: I0813 01:14:57.495864 2774 kubelet.go:2351] "Pod admission denied" podUID="fde851e6-41db-4fea-97ac-33242953b9fa" pod="tigera-operator/tigera-operator-747864d56d-brzkk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:57.505830 containerd[1543]: time="2025-08-13T01:14:57.505553018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c5f875d88-fvdcx,Uid:e328d277-9a4f-4cd1-abd9-2dfeda4cfcc9,Namespace:calico-system,Attempt:0,}" Aug 13 01:14:57.598210 kubelet[2774]: I0813 01:14:57.598079 2774 kubelet.go:2351] "Pod admission denied" podUID="68161677-c901-4582-923e-f3449c8af759" pod="tigera-operator/tigera-operator-747864d56d-5qh4n" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:57.602200 containerd[1543]: time="2025-08-13T01:14:57.602147227Z" level=error msg="Failed to destroy network for sandbox \"0e14cd38354fe8da6817c3f470e9d1f3e91bd7f8a07df830dc5a056bd337e73c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:57.605988 containerd[1543]: time="2025-08-13T01:14:57.605866441Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c5f875d88-fvdcx,Uid:e328d277-9a4f-4cd1-abd9-2dfeda4cfcc9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e14cd38354fe8da6817c3f470e9d1f3e91bd7f8a07df830dc5a056bd337e73c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:57.606835 kubelet[2774]: E0813 01:14:57.606687 2774 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e14cd38354fe8da6817c3f470e9d1f3e91bd7f8a07df830dc5a056bd337e73c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:57.606835 kubelet[2774]: E0813 01:14:57.606769 2774 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e14cd38354fe8da6817c3f470e9d1f3e91bd7f8a07df830dc5a056bd337e73c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c5f875d88-fvdcx" Aug 13 01:14:57.606867 systemd[1]: run-netns-cni\x2ddd3c646f\x2de558\x2d38f1\x2db664\x2d8b4cc467a928.mount: Deactivated successfully. 
Aug 13 01:14:57.607515 kubelet[2774]: E0813 01:14:57.607039 2774 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e14cd38354fe8da6817c3f470e9d1f3e91bd7f8a07df830dc5a056bd337e73c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c5f875d88-fvdcx" Aug 13 01:14:57.607515 kubelet[2774]: E0813 01:14:57.607198 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-c5f875d88-fvdcx_calico-system(e328d277-9a4f-4cd1-abd9-2dfeda4cfcc9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-c5f875d88-fvdcx_calico-system(e328d277-9a4f-4cd1-abd9-2dfeda4cfcc9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0e14cd38354fe8da6817c3f470e9d1f3e91bd7f8a07df830dc5a056bd337e73c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c5f875d88-fvdcx" podUID="e328d277-9a4f-4cd1-abd9-2dfeda4cfcc9" Aug 13 01:14:57.697795 kubelet[2774]: I0813 01:14:57.697508 2774 kubelet.go:2351] "Pod admission denied" podUID="4c01fafe-48ea-4691-9bc5-bbc0ab1049b2" pod="tigera-operator/tigera-operator-747864d56d-944ql" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:57.797006 kubelet[2774]: I0813 01:14:57.796925 2774 kubelet.go:2351] "Pod admission denied" podUID="afb8dcb6-bf48-454e-a100-503c8979a0a2" pod="tigera-operator/tigera-operator-747864d56d-rvj5z" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:57.903805 kubelet[2774]: I0813 01:14:57.903635 2774 kubelet.go:2351] "Pod admission denied" podUID="096f3eed-1805-48cf-a066-cdab38766182" pod="tigera-operator/tigera-operator-747864d56d-t87vk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:57.993824 kubelet[2774]: I0813 01:14:57.993754 2774 kubelet.go:2351] "Pod admission denied" podUID="2bc7a8f3-1ec3-4c71-8aad-044be62abd4d" pod="tigera-operator/tigera-operator-747864d56d-8sdhr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:58.094780 kubelet[2774]: I0813 01:14:58.094520 2774 kubelet.go:2351] "Pod admission denied" podUID="56c03169-05d9-4997-a0b8-7404f2b542c8" pod="tigera-operator/tigera-operator-747864d56d-cq6lm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:58.195306 kubelet[2774]: I0813 01:14:58.195235 2774 kubelet.go:2351] "Pod admission denied" podUID="7f422fa1-6823-4144-895e-ffea2af57150" pod="tigera-operator/tigera-operator-747864d56d-wd4qn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:58.248525 kubelet[2774]: I0813 01:14:58.248447 2774 kubelet.go:2351] "Pod admission denied" podUID="909fd1f0-6e71-4797-b85c-2bfada4b104f" pod="tigera-operator/tigera-operator-747864d56d-86srr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:58.343768 kubelet[2774]: I0813 01:14:58.343688 2774 kubelet.go:2351] "Pod admission denied" podUID="06ea6f46-e2e4-428c-a385-678b776fd0d7" pod="tigera-operator/tigera-operator-747864d56d-x2jth" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:58.545997 kubelet[2774]: I0813 01:14:58.545805 2774 kubelet.go:2351] "Pod admission denied" podUID="1a7369e7-bc90-4e85-a56b-1924c86da3a4" pod="tigera-operator/tigera-operator-747864d56d-b75r6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:58.660295 kubelet[2774]: I0813 01:14:58.658517 2774 kubelet.go:2351] "Pod admission denied" podUID="a2e08ea1-660f-4292-af0a-a1cf69680355" pod="tigera-operator/tigera-operator-747864d56d-q5sc6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:58.746239 kubelet[2774]: I0813 01:14:58.746177 2774 kubelet.go:2351] "Pod admission denied" podUID="8a00133f-7ad6-4beb-b99e-6b643d48f6e1" pod="tigera-operator/tigera-operator-747864d56d-t8jr4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:58.845712 kubelet[2774]: I0813 01:14:58.845045 2774 kubelet.go:2351] "Pod admission denied" podUID="a0694194-2e52-420e-a96b-4761a138344e" pod="tigera-operator/tigera-operator-747864d56d-z6jkl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:58.893126 kubelet[2774]: I0813 01:14:58.893061 2774 kubelet.go:2351] "Pod admission denied" podUID="fdbead07-7c47-42b0-ac5f-6f7e4191580e" pod="tigera-operator/tigera-operator-747864d56d-hg4kx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:58.996104 kubelet[2774]: I0813 01:14:58.996039 2774 kubelet.go:2351] "Pod admission denied" podUID="7671e37c-cb8b-4e75-8c58-606ac2e46cbd" pod="tigera-operator/tigera-operator-747864d56d-tqh88" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:59.100855 kubelet[2774]: I0813 01:14:59.100225 2774 kubelet.go:2351] "Pod admission denied" podUID="107df0da-d265-4b8f-b6fc-fa6363c749e9" pod="tigera-operator/tigera-operator-747864d56d-jt9dc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:59.200723 kubelet[2774]: I0813 01:14:59.200643 2774 kubelet.go:2351] "Pod admission denied" podUID="06fd74c7-787c-4a64-832c-2bfa3d09ecad" pod="tigera-operator/tigera-operator-747864d56d-48whq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:59.310894 kubelet[2774]: I0813 01:14:59.310582 2774 kubelet.go:2351] "Pod admission denied" podUID="fbeba56f-5d56-4810-88b7-f73f931d6fc5" pod="tigera-operator/tigera-operator-747864d56d-jx9dj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:59.393770 kubelet[2774]: I0813 01:14:59.393567 2774 kubelet.go:2351] "Pod admission denied" podUID="099b721c-f5ab-49fa-91c1-a832e0c70464" pod="tigera-operator/tigera-operator-747864d56d-5zrt7" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:59.505745 kubelet[2774]: E0813 01:14:59.505666 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount929401159: write /var/lib/containerd/tmpmounts/containerd-mount929401159/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-9bc9j" podUID="0909eaba-89ba-4b02-b2f3-a17e3b6e2afc" Aug 13 01:14:59.598251 kubelet[2774]: I0813 01:14:59.598174 2774 kubelet.go:2351] "Pod admission denied" podUID="c4912d5e-e0e4-4bc4-950b-2cc738052fc2" pod="tigera-operator/tigera-operator-747864d56d-vwkft" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:59.698097 kubelet[2774]: I0813 01:14:59.698007 2774 kubelet.go:2351] "Pod admission denied" podUID="97204c66-be2b-4d8a-bf55-33b5bbe8c104" pod="tigera-operator/tigera-operator-747864d56d-4v9wh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:59.758642 kubelet[2774]: I0813 01:14:59.758568 2774 kubelet.go:2351] "Pod admission denied" podUID="36bc4df1-df2d-4178-a187-ab2dea65d60a" pod="tigera-operator/tigera-operator-747864d56d-5xh8h" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:59.850821 kubelet[2774]: I0813 01:14:59.850733 2774 kubelet.go:2351] "Pod admission denied" podUID="170e79d2-6252-4cd5-940e-1311d96cc050" pod="tigera-operator/tigera-operator-747864d56d-2mrrn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:59.949523 kubelet[2774]: I0813 01:14:59.949098 2774 kubelet.go:2351] "Pod admission denied" podUID="219923ae-6253-43a7-a2ff-269174111481" pod="tigera-operator/tigera-operator-747864d56d-bpvx2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:00.050404 kubelet[2774]: I0813 01:15:00.050319 2774 kubelet.go:2351] "Pod admission denied" podUID="b0ffb1a7-49e5-4e58-8117-5a3a5ae7d769" pod="tigera-operator/tigera-operator-747864d56d-8wz48" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:00.150241 kubelet[2774]: I0813 01:15:00.150157 2774 kubelet.go:2351] "Pod admission denied" podUID="ffec81ba-627f-4071-a678-b2557522a2b3" pod="tigera-operator/tigera-operator-747864d56d-7qm6d" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:00.250493 kubelet[2774]: I0813 01:15:00.250297 2774 kubelet.go:2351] "Pod admission denied" podUID="fdf2bb7d-2f30-4e79-8df6-926a1146af67" pod="tigera-operator/tigera-operator-747864d56d-ddm7h" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:00.349109 kubelet[2774]: I0813 01:15:00.349033 2774 kubelet.go:2351] "Pod admission denied" podUID="60f668d0-0251-42df-a814-24b3606f630b" pod="tigera-operator/tigera-operator-747864d56d-t8x5n" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:00.448473 kubelet[2774]: I0813 01:15:00.448379 2774 kubelet.go:2351] "Pod admission denied" podUID="dff34aee-395a-4341-bf27-357e8ba17232" pod="tigera-operator/tigera-operator-747864d56d-qzh2l" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:15:00.505569 containerd[1543]: time="2025-08-13T01:15:00.505234505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bc7dg,Uid:e1069218-cdb9-4130-adce-4bdd23361a59,Namespace:calico-system,Attempt:0,}" Aug 13 01:15:00.554063 kubelet[2774]: I0813 01:15:00.553944 2774 kubelet.go:2351] "Pod admission denied" podUID="b2a8cfb5-7663-45fc-b6d1-3fe79bf99875" pod="tigera-operator/tigera-operator-747864d56d-8xdx9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:00.584989 containerd[1543]: time="2025-08-13T01:15:00.584845167Z" level=error msg="Failed to destroy network for sandbox \"80a2c2cee97e614c70332a6a29f8c0983bcb6fd4b9f099ff3a511d5fc47236c7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:15:00.587267 systemd[1]: run-netns-cni\x2df069512b\x2d5826\x2d2586\x2d616a\x2da13db51d915f.mount: Deactivated successfully. Aug 13 01:15:00.589685 containerd[1543]: time="2025-08-13T01:15:00.589415147Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bc7dg,Uid:e1069218-cdb9-4130-adce-4bdd23361a59,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"80a2c2cee97e614c70332a6a29f8c0983bcb6fd4b9f099ff3a511d5fc47236c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:15:00.591652 kubelet[2774]: E0813 01:15:00.591585 2774 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80a2c2cee97e614c70332a6a29f8c0983bcb6fd4b9f099ff3a511d5fc47236c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:15:00.591755 kubelet[2774]: E0813 01:15:00.591660 2774 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80a2c2cee97e614c70332a6a29f8c0983bcb6fd4b9f099ff3a511d5fc47236c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bc7dg" Aug 13 01:15:00.591755 kubelet[2774]: E0813 01:15:00.591683 2774 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80a2c2cee97e614c70332a6a29f8c0983bcb6fd4b9f099ff3a511d5fc47236c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bc7dg" Aug 13 01:15:00.591755 kubelet[2774]: E0813 01:15:00.591723 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bc7dg_calico-system(e1069218-cdb9-4130-adce-4bdd23361a59)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bc7dg_calico-system(e1069218-cdb9-4130-adce-4bdd23361a59)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"80a2c2cee97e614c70332a6a29f8c0983bcb6fd4b9f099ff3a511d5fc47236c7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bc7dg" podUID="e1069218-cdb9-4130-adce-4bdd23361a59" Aug 13 01:15:00.609637 kubelet[2774]: I0813 01:15:00.609557 2774 kubelet.go:2351] "Pod admission denied" podUID="62e302aa-37cf-45d0-8c73-a175f90cb3d8" pod="tigera-operator/tigera-operator-747864d56d-q68c9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:00.694371 kubelet[2774]: I0813 01:15:00.694287 2774 kubelet.go:2351] "Pod admission denied" podUID="652614b1-91c0-470a-9895-b502a0b88c6b" pod="tigera-operator/tigera-operator-747864d56d-n787r" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:00.799879 kubelet[2774]: I0813 01:15:00.799231 2774 kubelet.go:2351] "Pod admission denied" podUID="5e4c87c9-b4cd-4a1a-8d4d-c993fffd6f79" pod="tigera-operator/tigera-operator-747864d56d-q8465" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:00.847378 kubelet[2774]: I0813 01:15:00.847292 2774 kubelet.go:2351] "Pod admission denied" podUID="fc62d49a-a1fd-4e4d-9262-1aed9b37c084" pod="tigera-operator/tigera-operator-747864d56d-j7nv9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:00.943506 kubelet[2774]: I0813 01:15:00.943232 2774 kubelet.go:2351] "Pod admission denied" podUID="345844d4-8e99-4840-968e-935362a22122" pod="tigera-operator/tigera-operator-747864d56d-jfjtt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:01.055437 kubelet[2774]: I0813 01:15:01.054328 2774 kubelet.go:2351] "Pod admission denied" podUID="cc5d7a35-c90c-4706-9cba-6fa89f96f132" pod="tigera-operator/tigera-operator-747864d56d-h2jwx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:01.093949 kubelet[2774]: I0813 01:15:01.093892 2774 kubelet.go:2351] "Pod admission denied" podUID="567be206-b55f-4934-91ec-514dfaa3bf92" pod="tigera-operator/tigera-operator-747864d56d-tdt8t" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:01.196451 kubelet[2774]: I0813 01:15:01.196001 2774 kubelet.go:2351] "Pod admission denied" podUID="4b9aaaa0-0701-410c-872e-ff0b06a2fc10" pod="tigera-operator/tigera-operator-747864d56d-22fsg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:01.300416 kubelet[2774]: I0813 01:15:01.300290 2774 kubelet.go:2351] "Pod admission denied" podUID="9dadfe22-b5d3-46f3-acca-e5eb22555891" pod="tigera-operator/tigera-operator-747864d56d-2mdlj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:01.404157 kubelet[2774]: I0813 01:15:01.403955 2774 kubelet.go:2351] "Pod admission denied" podUID="ba3fd368-7cfc-4588-ac6d-e937d57c686b" pod="tigera-operator/tigera-operator-747864d56d-sjd7x" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:01.500534 kubelet[2774]: I0813 01:15:01.500458 2774 kubelet.go:2351] "Pod admission denied" podUID="0bfa37b2-4ce6-41e7-b6ce-7b067fb63e10" pod="tigera-operator/tigera-operator-747864d56d-dqq7j" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:15:01.504373 kubelet[2774]: E0813 01:15:01.504270 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:15:01.506237 containerd[1543]: time="2025-08-13T01:15:01.506177435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qnwfr,Uid:713f2d42-12d1-4f50-bdc7-9964c95a9e2a,Namespace:kube-system,Attempt:0,}" Aug 13 01:15:01.581471 containerd[1543]: time="2025-08-13T01:15:01.581390800Z" level=error msg="Failed to destroy network for sandbox \"3372b87f81e6d6189e554eeeb380fb576f6cb53de452fc04aac297f8294f74b6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:15:01.584116 containerd[1543]: time="2025-08-13T01:15:01.582957923Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qnwfr,Uid:713f2d42-12d1-4f50-bdc7-9964c95a9e2a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3372b87f81e6d6189e554eeeb380fb576f6cb53de452fc04aac297f8294f74b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:15:01.585396 kubelet[2774]: E0813 01:15:01.584373 2774 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3372b87f81e6d6189e554eeeb380fb576f6cb53de452fc04aac297f8294f74b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:15:01.586298 systemd[1]: run-netns-cni\x2df91753d0\x2dc975\x2d68d5\x2d946e\x2db9f7b9dd7c7f.mount: Deactivated successfully. 
Aug 13 01:15:01.587167 kubelet[2774]: E0813 01:15:01.586953 2774 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3372b87f81e6d6189e554eeeb380fb576f6cb53de452fc04aac297f8294f74b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qnwfr" Aug 13 01:15:01.587167 kubelet[2774]: E0813 01:15:01.586997 2774 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3372b87f81e6d6189e554eeeb380fb576f6cb53de452fc04aac297f8294f74b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qnwfr" Aug 13 01:15:01.587167 kubelet[2774]: E0813 01:15:01.587057 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-qnwfr_kube-system(713f2d42-12d1-4f50-bdc7-9964c95a9e2a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-qnwfr_kube-system(713f2d42-12d1-4f50-bdc7-9964c95a9e2a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3372b87f81e6d6189e554eeeb380fb576f6cb53de452fc04aac297f8294f74b6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-qnwfr" podUID="713f2d42-12d1-4f50-bdc7-9964c95a9e2a" Aug 13 01:15:01.606747 kubelet[2774]: I0813 01:15:01.606681 2774 kubelet.go:2351] "Pod admission denied" podUID="0a0da5f4-fba4-4839-8c17-0f03b7e8768a" pod="tigera-operator/tigera-operator-747864d56d-cjwlq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:01.704377 kubelet[2774]: I0813 01:15:01.704284 2774 kubelet.go:2351] "Pod admission denied" podUID="a0625aaf-9a01-415a-bd63-a7fe1db8e8e0" pod="tigera-operator/tigera-operator-747864d56d-wbphn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:01.800169 kubelet[2774]: I0813 01:15:01.800107 2774 kubelet.go:2351] "Pod admission denied" podUID="43c52abe-55e9-40e0-ad2d-9b601b256799" pod="tigera-operator/tigera-operator-747864d56d-dfqt9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:01.894085 kubelet[2774]: I0813 01:15:01.893645 2774 kubelet.go:2351] "Pod admission denied" podUID="d0f7bc2d-c0af-4023-b982-dea0f1c6098f" pod="tigera-operator/tigera-operator-747864d56d-vzwq2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:01.996076 kubelet[2774]: I0813 01:15:01.995913 2774 kubelet.go:2351] "Pod admission denied" podUID="5c5d9a14-e072-4613-b219-aaa4545f23a2" pod="tigera-operator/tigera-operator-747864d56d-ckzzm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:02.095010 kubelet[2774]: I0813 01:15:02.094929 2774 kubelet.go:2351] "Pod admission denied" podUID="a6de2f67-1c85-4844-a7bd-a2d5ca5f9bb1" pod="tigera-operator/tigera-operator-747864d56d-trhfh" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:15:02.198817 kubelet[2774]: I0813 01:15:02.198751 2774 kubelet.go:2351] "Pod admission denied" podUID="2c37efd9-01bc-4eb2-8f75-8193d2098984" pod="tigera-operator/tigera-operator-747864d56d-q6lkj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:02.298627 kubelet[2774]: I0813 01:15:02.298440 2774 kubelet.go:2351] "Pod admission denied" podUID="0c204fd0-3274-4561-8529-20cce465c59c" pod="tigera-operator/tigera-operator-747864d56d-t7m6p" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:02.350851 kubelet[2774]: I0813 01:15:02.350791 2774 kubelet.go:2351] "Pod admission denied" podUID="8485e8ac-1b8a-4f07-8e61-107f55776be0" pod="tigera-operator/tigera-operator-747864d56d-trpsr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:02.445827 kubelet[2774]: I0813 01:15:02.445741 2774 kubelet.go:2351] "Pod admission denied" podUID="fabaa641-2244-44e6-9dee-2c54c202d02a" pod="tigera-operator/tigera-operator-747864d56d-khkzz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:02.562658 kubelet[2774]: I0813 01:15:02.562523 2774 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:15:02.562658 kubelet[2774]: I0813 01:15:02.562567 2774 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:15:02.565789 kubelet[2774]: I0813 01:15:02.565704 2774 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:15:02.568534 kubelet[2774]: I0813 01:15:02.568507 2774 image_gc_manager.go:487] "Removing image to free bytes" imageID="sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" size=18562039 runtimeHandler="" Aug 13 01:15:02.570448 containerd[1543]: time="2025-08-13T01:15:02.569342658Z" level=info msg="RemoveImage \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 13 01:15:02.571899 containerd[1543]: time="2025-08-13T01:15:02.571793079Z" level=info msg="ImageDelete event name:\"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 01:15:02.572740 containerd[1543]: time="2025-08-13T01:15:02.572692067Z" level=info msg="ImageDelete event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\"" Aug 13 01:15:02.573458 containerd[1543]: time="2025-08-13T01:15:02.573400863Z" level=info msg="RemoveImage \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" returns successfully" Aug 13 01:15:02.573458 containerd[1543]: time="2025-08-13T01:15:02.573449143Z" level=info msg="ImageDelete event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 13 01:15:02.573982 kubelet[2774]: I0813 01:15:02.573923 2774 image_gc_manager.go:487] "Removing image to free bytes" imageID="sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc" size=57680541 runtimeHandler="" Aug 13 01:15:02.574882 containerd[1543]: time="2025-08-13T01:15:02.574746355Z" level=info msg="RemoveImage \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Aug 13 01:15:02.575715 containerd[1543]: time="2025-08-13T01:15:02.575689443Z" level=info msg="ImageDelete event name:\"registry.k8s.io/etcd:3.5.16-0\"" Aug 13 01:15:02.576307 containerd[1543]: time="2025-08-13T01:15:02.576284888Z" level=info msg="ImageDelete event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\"" Aug 13 01:15:02.576840 containerd[1543]: 
time="2025-08-13T01:15:02.576803152Z" level=info msg="RemoveImage \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" returns successfully" Aug 13 01:15:02.576887 containerd[1543]: time="2025-08-13T01:15:02.576873023Z" level=info msg="ImageDelete event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Aug 13 01:15:02.589236 kubelet[2774]: I0813 01:15:02.589187 2774 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:15:02.589514 kubelet[2774]: I0813 01:15:02.589271 2774 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-668d6bf9bc-wp8jf","kube-system/coredns-668d6bf9bc-qnwfr","calico-system/calico-kube-controllers-c5f875d88-fvdcx","calico-system/csi-node-driver-bc7dg","calico-system/calico-node-9bc9j","calico-system/calico-typha-bf7d6589c-n48dd","kube-system/kube-controller-manager-172-234-199-8","kube-system/kube-proxy-8p8zx","kube-system/kube-apiserver-172-234-199-8","kube-system/kube-scheduler-172-234-199-8"] Aug 13 01:15:02.589514 kubelet[2774]: E0813 01:15:02.589304 2774 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-wp8jf" Aug 13 01:15:02.589514 kubelet[2774]: E0813 01:15:02.589316 2774 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-qnwfr" Aug 13 01:15:02.589514 kubelet[2774]: E0813 01:15:02.589323 2774 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-c5f875d88-fvdcx" Aug 13 01:15:02.589514 kubelet[2774]: E0813 01:15:02.589330 2774 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-bc7dg" Aug 13 01:15:02.589514 kubelet[2774]: E0813 01:15:02.589338 2774 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-9bc9j" Aug 13 01:15:02.589514 kubelet[2774]: E0813 01:15:02.589467 2774 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-bf7d6589c-n48dd" Aug 13 01:15:02.589514 kubelet[2774]: E0813 01:15:02.589493 2774 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-199-8" Aug 13 01:15:02.589514 kubelet[2774]: E0813 01:15:02.589507 2774 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-8p8zx" Aug 13 01:15:02.589514 kubelet[2774]: E0813 01:15:02.589516 2774 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-199-8" Aug 13 01:15:02.589831 kubelet[2774]: E0813 01:15:02.589542 2774 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-199-8" Aug 13 01:15:02.589831 kubelet[2774]: I0813 01:15:02.589552 2774 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 01:15:02.645745 kubelet[2774]: I0813 01:15:02.645668 2774 kubelet.go:2351] "Pod admission denied" podUID="383dbf6d-2037-4606-80d3-16e5add90eee" pod="tigera-operator/tigera-operator-747864d56d-hsbdq" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:15:02.746513 kubelet[2774]: I0813 01:15:02.746416 2774 kubelet.go:2351] "Pod admission denied" podUID="8a6e537a-24a2-4078-b28c-245c90c14ce3" pod="tigera-operator/tigera-operator-747864d56d-bqhwb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:02.852512 kubelet[2774]: I0813 01:15:02.851742 2774 kubelet.go:2351] "Pod admission denied" podUID="d3099802-1163-454d-a022-f719e7da8aeb" pod="tigera-operator/tigera-operator-747864d56d-r4vwt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:02.964105 kubelet[2774]: I0813 01:15:02.964035 2774 kubelet.go:2351] "Pod admission denied" podUID="18578945-58ae-461a-b294-acfc61d43fd6" pod="tigera-operator/tigera-operator-747864d56d-vmskh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:03.097742 kubelet[2774]: I0813 01:15:03.097661 2774 kubelet.go:2351] "Pod admission denied" podUID="075c9388-0727-441e-ae99-699b04e3e4ff" pod="tigera-operator/tigera-operator-747864d56d-6smll" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:03.199399 kubelet[2774]: I0813 01:15:03.199305 2774 kubelet.go:2351] "Pod admission denied" podUID="9d599479-3d55-4882-94a7-7a855034e9ea" pod="tigera-operator/tigera-operator-747864d56d-hsgb9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:03.302629 kubelet[2774]: I0813 01:15:03.302542 2774 kubelet.go:2351] "Pod admission denied" podUID="70c1e2c6-49ba-4ff3-aba3-7bdb54cdad6a" pod="tigera-operator/tigera-operator-747864d56d-56zq5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:03.401821 kubelet[2774]: I0813 01:15:03.401738 2774 kubelet.go:2351] "Pod admission denied" podUID="8bdf98ef-c2ce-4bf8-ac92-db7178d1d987" pod="tigera-operator/tigera-operator-747864d56d-sf4q2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:03.508732 kubelet[2774]: I0813 01:15:03.506383 2774 kubelet.go:2351] "Pod admission denied" podUID="08ec2a5b-55ba-4042-800f-f9578184d934" pod="tigera-operator/tigera-operator-747864d56d-tqsdt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:03.595425 kubelet[2774]: I0813 01:15:03.595344 2774 kubelet.go:2351] "Pod admission denied" podUID="55b04086-9e4d-4b15-8d1d-3af5313b9880" pod="tigera-operator/tigera-operator-747864d56d-7v9n9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:03.698056 kubelet[2774]: I0813 01:15:03.697934 2774 kubelet.go:2351] "Pod admission denied" podUID="23e01cd0-9e4f-483c-9af6-edd46a04b002" pod="tigera-operator/tigera-operator-747864d56d-h5dqm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:03.800094 kubelet[2774]: I0813 01:15:03.799920 2774 kubelet.go:2351] "Pod admission denied" podUID="96ff1cee-d4b3-4f1d-bd39-bcc57a4d65d4" pod="tigera-operator/tigera-operator-747864d56d-ncgfb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:04.020644 kubelet[2774]: I0813 01:15:04.019898 2774 kubelet.go:2351] "Pod admission denied" podUID="d21609a4-a269-4fdd-8d0f-da0e3ca5e98c" pod="tigera-operator/tigera-operator-747864d56d-7ng5k" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:15:04.107623 kubelet[2774]: I0813 01:15:04.107448 2774 kubelet.go:2351] "Pod admission denied" podUID="198ee837-0b1a-4091-863e-f81f83d5f86b" pod="tigera-operator/tigera-operator-747864d56d-7f246" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:04.244586 kubelet[2774]: I0813 01:15:04.244505 2774 kubelet.go:2351] "Pod admission denied" podUID="61b0ae64-596a-4326-bae2-82d8f3ab3b1b" pod="tigera-operator/tigera-operator-747864d56d-2xxx8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:04.360662 kubelet[2774]: I0813 01:15:04.359913 2774 kubelet.go:2351] "Pod admission denied" podUID="fe6d80bc-9a41-4e03-902c-3169316e46aa" pod="tigera-operator/tigera-operator-747864d56d-gmgfg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:04.449022 kubelet[2774]: I0813 01:15:04.448948 2774 kubelet.go:2351] "Pod admission denied" podUID="c02cdc76-5b70-434e-956a-e76622ba0820" pod="tigera-operator/tigera-operator-747864d56d-5l4nl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:04.546237 kubelet[2774]: I0813 01:15:04.546161 2774 kubelet.go:2351] "Pod admission denied" podUID="828ee9db-537a-4b96-a5fb-3b1174ada2f6" pod="tigera-operator/tigera-operator-747864d56d-frz4l" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:04.645641 kubelet[2774]: I0813 01:15:04.645413 2774 kubelet.go:2351] "Pod admission denied" podUID="0345d12d-ea1b-4444-b3c9-5d8f873f206a" pod="tigera-operator/tigera-operator-747864d56d-sw7dx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:04.745025 kubelet[2774]: I0813 01:15:04.744953 2774 kubelet.go:2351] "Pod admission denied" podUID="7e01ef7a-90db-4cae-ba92-ccdf4c0b451f" pod="tigera-operator/tigera-operator-747864d56d-6pg2n" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:04.844823 kubelet[2774]: I0813 01:15:04.844764 2774 kubelet.go:2351] "Pod admission denied" podUID="0ccd82c2-54a9-471c-8995-254f90337620" pod="tigera-operator/tigera-operator-747864d56d-rrgqp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:04.947664 kubelet[2774]: I0813 01:15:04.947563 2774 kubelet.go:2351] "Pod admission denied" podUID="f6d60489-0ccb-48a0-b9df-2c9b7daf35fb" pod="tigera-operator/tigera-operator-747864d56d-h8fcc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:05.053378 kubelet[2774]: I0813 01:15:05.052854 2774 kubelet.go:2351] "Pod admission denied" podUID="c17d2417-6829-4ba1-ba2c-c39de48b192d" pod="tigera-operator/tigera-operator-747864d56d-fzdlm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:05.146305 kubelet[2774]: I0813 01:15:05.146230 2774 kubelet.go:2351] "Pod admission denied" podUID="0994d768-5ac2-4e65-945e-7308480b8064" pod="tigera-operator/tigera-operator-747864d56d-97hw5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:05.246248 kubelet[2774]: I0813 01:15:05.246074 2774 kubelet.go:2351] "Pod admission denied" podUID="85b74616-86af-414f-8a9d-560789270851" pod="tigera-operator/tigera-operator-747864d56d-6r2st" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:15:05.295112 kubelet[2774]: I0813 01:15:05.295045 2774 kubelet.go:2351] "Pod admission denied" podUID="5a4eb7d4-eb39-4f2c-8def-0f128d63a7e0" pod="tigera-operator/tigera-operator-747864d56d-wvqzc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:05.401340 kubelet[2774]: I0813 01:15:05.401250 2774 kubelet.go:2351] "Pod admission denied" podUID="23ac5d8e-aba4-4ac3-a1ff-5e959f1a29d0" pod="tigera-operator/tigera-operator-747864d56d-jljkr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:15:06.540325 systemd[1]: Started sshd@9-172.234.199.8:22-147.75.109.163:57944.service - OpenSSH per-connection server daemon (147.75.109.163:57944). Aug 13 01:15:06.901869 sshd[4705]: Accepted publickey for core from 147.75.109.163 port 57944 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:15:06.904872 sshd-session[4705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:15:06.911062 systemd-logind[1515]: New session 10 of user core. Aug 13 01:15:06.926712 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 13 01:15:07.221398 sshd[4707]: Connection closed by 147.75.109.163 port 57944 Aug 13 01:15:07.222257 sshd-session[4705]: pam_unix(sshd:session): session closed for user core Aug 13 01:15:07.228237 systemd[1]: sshd@9-172.234.199.8:22-147.75.109.163:57944.service: Deactivated successfully. Aug 13 01:15:07.231635 systemd[1]: session-10.scope: Deactivated successfully. Aug 13 01:15:07.232749 systemd-logind[1515]: Session 10 logged out. Waiting for processes to exit. Aug 13 01:15:07.235000 systemd-logind[1515]: Removed session 10. Aug 13 01:15:08.504384 kubelet[2774]: E0813 01:15:08.504150 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:15:08.506698 containerd[1543]: time="2025-08-13T01:15:08.506120609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wp8jf,Uid:56ce87f4-747f-470b-8388-a8400bdda009,Namespace:kube-system,Attempt:0,}" Aug 13 01:15:08.574391 containerd[1543]: time="2025-08-13T01:15:08.574290900Z" level=error msg="Failed to destroy network for sandbox \"1fa73097abf2f2e2ccaef23b0d01e610de14cf43a31cee54ab1294b9677567bb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:15:08.576910 systemd[1]: run-netns-cni\x2d98ac1205\x2dc3c3\x2d2b3e\x2dcc4e\x2d1f9a16907d7e.mount: Deactivated successfully. 
Aug 13 01:15:08.579084 containerd[1543]: time="2025-08-13T01:15:08.578282081Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wp8jf,Uid:56ce87f4-747f-470b-8388-a8400bdda009,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fa73097abf2f2e2ccaef23b0d01e610de14cf43a31cee54ab1294b9677567bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:15:08.579178 kubelet[2774]: E0813 01:15:08.578903 2774 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fa73097abf2f2e2ccaef23b0d01e610de14cf43a31cee54ab1294b9677567bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:15:08.579178 kubelet[2774]: E0813 01:15:08.579060 2774 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fa73097abf2f2e2ccaef23b0d01e610de14cf43a31cee54ab1294b9677567bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wp8jf" Aug 13 01:15:08.579178 kubelet[2774]: E0813 01:15:08.579088 2774 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fa73097abf2f2e2ccaef23b0d01e610de14cf43a31cee54ab1294b9677567bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wp8jf" Aug 13 01:15:08.580429 kubelet[2774]: E0813 01:15:08.579162 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-wp8jf_kube-system(56ce87f4-747f-470b-8388-a8400bdda009)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-wp8jf_kube-system(56ce87f4-747f-470b-8388-a8400bdda009)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1fa73097abf2f2e2ccaef23b0d01e610de14cf43a31cee54ab1294b9677567bb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-wp8jf" podUID="56ce87f4-747f-470b-8388-a8400bdda009" Aug 13 01:15:11.504521 containerd[1543]: time="2025-08-13T01:15:11.504442941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c5f875d88-fvdcx,Uid:e328d277-9a4f-4cd1-abd9-2dfeda4cfcc9,Namespace:calico-system,Attempt:0,}" Aug 13 01:15:11.505019 containerd[1543]: time="2025-08-13T01:15:11.504443111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bc7dg,Uid:e1069218-cdb9-4130-adce-4bdd23361a59,Namespace:calico-system,Attempt:0,}" Aug 13 01:15:11.579521 containerd[1543]: time="2025-08-13T01:15:11.578983082Z" level=error msg="Failed to destroy network for sandbox \"924d26c28f7f6e295a699679468daa2e5d0f94e5a43e5117a401f7f324193127\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:15:11.581647 containerd[1543]: time="2025-08-13T01:15:11.581599372Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bc7dg,Uid:e1069218-cdb9-4130-adce-4bdd23361a59,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"924d26c28f7f6e295a699679468daa2e5d0f94e5a43e5117a401f7f324193127\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:15:11.582716 kubelet[2774]: E0813 01:15:11.582194 2774 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"924d26c28f7f6e295a699679468daa2e5d0f94e5a43e5117a401f7f324193127\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:15:11.582716 kubelet[2774]: E0813 01:15:11.582278 2774 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"924d26c28f7f6e295a699679468daa2e5d0f94e5a43e5117a401f7f324193127\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bc7dg" Aug 13 01:15:11.582716 kubelet[2774]: E0813 01:15:11.582313 2774 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"924d26c28f7f6e295a699679468daa2e5d0f94e5a43e5117a401f7f324193127\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bc7dg" Aug 13 01:15:11.582716 kubelet[2774]: E0813 01:15:11.582402 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bc7dg_calico-system(e1069218-cdb9-4130-adce-4bdd23361a59)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bc7dg_calico-system(e1069218-cdb9-4130-adce-4bdd23361a59)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"924d26c28f7f6e295a699679468daa2e5d0f94e5a43e5117a401f7f324193127\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bc7dg" podUID="e1069218-cdb9-4130-adce-4bdd23361a59" Aug 13 01:15:11.583632 systemd[1]: run-netns-cni\x2dbda22f4f\x2d3678\x2d8625\x2d947d\x2d3a98f74f4825.mount: Deactivated successfully. 
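The sandbox failures above and below all trace back to the same missing file: the Calico CNI plugin stats /var/lib/calico/nodename, which the calico/node container normally writes at startup, and calico/node has not started here because its image pull keeps failing. A minimal Go sketch of that check follows — an illustration only, not Calico's actual plugin code; the path comes straight from the logged error message.

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const nodenameFile = "/var/lib/calico/nodename" // path taken from the error above

	data, err := os.ReadFile(nodenameFile)
	if os.IsNotExist(err) {
		// The state this log keeps hitting: calico/node never started
		// (ImagePullBackOff), so the file was never written and every
		// sandbox add/delete fails with "stat ... no such file or directory".
		fmt.Println(nodenameFile, "is missing; calico/node is not running yet")
		os.Exit(1)
	}
	if err != nil {
		fmt.Println("unexpected error:", err)
		os.Exit(1)
	}
	fmt.Println("node name recorded by calico/node:", strings.TrimSpace(string(data)))
}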
Aug 13 01:15:11.587504 containerd[1543]: time="2025-08-13T01:15:11.587444327Z" level=error msg="Failed to destroy network for sandbox \"fba91ade22696a2cbefffdb4659065770545acb1774c09179c5214e30c7f2bbd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:15:11.591083 systemd[1]: run-netns-cni\x2d46065923\x2df814\x2d821f\x2d3078\x2d7e3ae4042f78.mount: Deactivated successfully. Aug 13 01:15:11.592345 containerd[1543]: time="2025-08-13T01:15:11.592304235Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c5f875d88-fvdcx,Uid:e328d277-9a4f-4cd1-abd9-2dfeda4cfcc9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fba91ade22696a2cbefffdb4659065770545acb1774c09179c5214e30c7f2bbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:15:11.593119 kubelet[2774]: E0813 01:15:11.592981 2774 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fba91ade22696a2cbefffdb4659065770545acb1774c09179c5214e30c7f2bbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:15:11.593262 kubelet[2774]: E0813 01:15:11.593091 2774 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fba91ade22696a2cbefffdb4659065770545acb1774c09179c5214e30c7f2bbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c5f875d88-fvdcx" Aug 13 01:15:11.593457 kubelet[2774]: E0813 01:15:11.593347 2774 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fba91ade22696a2cbefffdb4659065770545acb1774c09179c5214e30c7f2bbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c5f875d88-fvdcx" Aug 13 01:15:11.593596 kubelet[2774]: E0813 01:15:11.593563 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-c5f875d88-fvdcx_calico-system(e328d277-9a4f-4cd1-abd9-2dfeda4cfcc9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-c5f875d88-fvdcx_calico-system(e328d277-9a4f-4cd1-abd9-2dfeda4cfcc9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fba91ade22696a2cbefffdb4659065770545acb1774c09179c5214e30c7f2bbd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c5f875d88-fvdcx" podUID="e328d277-9a4f-4cd1-abd9-2dfeda4cfcc9" Aug 13 01:15:12.286030 systemd[1]: Started sshd@10-172.234.199.8:22-147.75.109.163:60666.service - OpenSSH per-connection server 
daemon (147.75.109.163:60666). Aug 13 01:15:12.636448 sshd[4805]: Accepted publickey for core from 147.75.109.163 port 60666 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:15:12.638155 sshd-session[4805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:15:12.644386 systemd-logind[1515]: New session 11 of user core. Aug 13 01:15:12.653528 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 13 01:15:12.965128 sshd[4807]: Connection closed by 147.75.109.163 port 60666 Aug 13 01:15:12.966010 sshd-session[4805]: pam_unix(sshd:session): session closed for user core Aug 13 01:15:12.972754 systemd-logind[1515]: Session 11 logged out. Waiting for processes to exit. Aug 13 01:15:12.973988 systemd[1]: sshd@10-172.234.199.8:22-147.75.109.163:60666.service: Deactivated successfully. Aug 13 01:15:12.977289 systemd[1]: session-11.scope: Deactivated successfully. Aug 13 01:15:12.980114 systemd-logind[1515]: Removed session 11. Aug 13 01:15:14.505648 containerd[1543]: time="2025-08-13T01:15:14.505568095Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 01:15:16.509707 kubelet[2774]: E0813 01:15:16.509666 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:15:16.512250 containerd[1543]: time="2025-08-13T01:15:16.511716779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qnwfr,Uid:713f2d42-12d1-4f50-bdc7-9964c95a9e2a,Namespace:kube-system,Attempt:0,}" Aug 13 01:15:16.599419 containerd[1543]: time="2025-08-13T01:15:16.599320105Z" level=error msg="Failed to destroy network for sandbox \"76d82e230182b64e89d4c9c810d39820ff4f1830b7cc24c0058ddefc770f2095\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:15:16.604634 systemd[1]: run-netns-cni\x2d427bae78\x2dcd40\x2dffd2\x2dc899\x2dc56d25547e28.mount: Deactivated successfully. 
Aug 13 01:15:16.607381 containerd[1543]: time="2025-08-13T01:15:16.606773229Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qnwfr,Uid:713f2d42-12d1-4f50-bdc7-9964c95a9e2a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"76d82e230182b64e89d4c9c810d39820ff4f1830b7cc24c0058ddefc770f2095\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:15:16.610103 kubelet[2774]: E0813 01:15:16.609563 2774 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76d82e230182b64e89d4c9c810d39820ff4f1830b7cc24c0058ddefc770f2095\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:15:16.610362 kubelet[2774]: E0813 01:15:16.610080 2774 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76d82e230182b64e89d4c9c810d39820ff4f1830b7cc24c0058ddefc770f2095\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qnwfr" Aug 13 01:15:16.610362 kubelet[2774]: E0813 01:15:16.610225 2774 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76d82e230182b64e89d4c9c810d39820ff4f1830b7cc24c0058ddefc770f2095\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qnwfr" Aug 13 01:15:16.610489 kubelet[2774]: E0813 01:15:16.610463 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-qnwfr_kube-system(713f2d42-12d1-4f50-bdc7-9964c95a9e2a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-qnwfr_kube-system(713f2d42-12d1-4f50-bdc7-9964c95a9e2a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"76d82e230182b64e89d4c9c810d39820ff4f1830b7cc24c0058ddefc770f2095\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-qnwfr" podUID="713f2d42-12d1-4f50-bdc7-9964c95a9e2a" Aug 13 01:15:18.033614 systemd[1]: Started sshd@11-172.234.199.8:22-147.75.109.163:60682.service - OpenSSH per-connection server daemon (147.75.109.163:60682). Aug 13 01:15:18.383241 sshd[4852]: Accepted publickey for core from 147.75.109.163 port 60682 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:15:18.385372 sshd-session[4852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:15:18.393794 systemd-logind[1515]: New session 12 of user core. Aug 13 01:15:18.400536 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 13 01:15:18.493091 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount149615367.mount: Deactivated successfully. 
Aug 13 01:15:18.521676 containerd[1543]: time="2025-08-13T01:15:18.521609809Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:15:18.522673 containerd[1543]: time="2025-08-13T01:15:18.522520116Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Aug 13 01:15:18.523138 containerd[1543]: time="2025-08-13T01:15:18.523105920Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:15:18.524550 containerd[1543]: time="2025-08-13T01:15:18.524517870Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:15:18.525245 containerd[1543]: time="2025-08-13T01:15:18.525219695Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 4.01960414s" Aug 13 01:15:18.525464 containerd[1543]: time="2025-08-13T01:15:18.525325296Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Aug 13 01:15:18.542834 containerd[1543]: time="2025-08-13T01:15:18.542784950Z" level=info msg="CreateContainer within sandbox \"76ae3444196225a303323f3547299db8a8e29579071416116c7a062f7244cd3f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 13 01:15:18.551117 containerd[1543]: time="2025-08-13T01:15:18.551083729Z" level=info msg="Container b1fd6ba154f501aeae79f3ad39f14eac062a2010a34c066c83e2ce20e677260b: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:15:18.564850 containerd[1543]: time="2025-08-13T01:15:18.564788087Z" level=info msg="CreateContainer within sandbox \"76ae3444196225a303323f3547299db8a8e29579071416116c7a062f7244cd3f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b1fd6ba154f501aeae79f3ad39f14eac062a2010a34c066c83e2ce20e677260b\"" Aug 13 01:15:18.565587 containerd[1543]: time="2025-08-13T01:15:18.565534132Z" level=info msg="StartContainer for \"b1fd6ba154f501aeae79f3ad39f14eac062a2010a34c066c83e2ce20e677260b\"" Aug 13 01:15:18.567150 containerd[1543]: time="2025-08-13T01:15:18.567119753Z" level=info msg="connecting to shim b1fd6ba154f501aeae79f3ad39f14eac062a2010a34c066c83e2ce20e677260b" address="unix:///run/containerd/s/cf9b7ff19b55ec6486c0acc44bf3240394cd63a1845dc6c2022ea4b685d3e5e0" protocol=ttrpc version=3 Aug 13 01:15:18.592604 systemd[1]: Started cri-containerd-b1fd6ba154f501aeae79f3ad39f14eac062a2010a34c066c83e2ce20e677260b.scope - libcontainer container b1fd6ba154f501aeae79f3ad39f14eac062a2010a34c066c83e2ce20e677260b. 
Aug 13 01:15:18.677107 containerd[1543]: time="2025-08-13T01:15:18.677017684Z" level=info msg="StartContainer for \"b1fd6ba154f501aeae79f3ad39f14eac062a2010a34c066c83e2ce20e677260b\" returns successfully" Aug 13 01:15:18.730332 sshd[4854]: Connection closed by 147.75.109.163 port 60682 Aug 13 01:15:18.731203 sshd-session[4852]: pam_unix(sshd:session): session closed for user core Aug 13 01:15:18.737318 systemd[1]: sshd@11-172.234.199.8:22-147.75.109.163:60682.service: Deactivated successfully. Aug 13 01:15:18.740936 systemd[1]: session-12.scope: Deactivated successfully. Aug 13 01:15:18.744972 systemd-logind[1515]: Session 12 logged out. Waiting for processes to exit. Aug 13 01:15:18.746527 systemd-logind[1515]: Removed session 12. Aug 13 01:15:18.778377 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 13 01:15:18.778528 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Aug 13 01:15:18.790620 systemd[1]: Started sshd@12-172.234.199.8:22-147.75.109.163:40660.service - OpenSSH per-connection server daemon (147.75.109.163:40660). Aug 13 01:15:19.044158 kubelet[2774]: I0813 01:15:19.043983 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-9bc9j" podStartSLOduration=1.972241921 podStartE2EDuration="2m2.04392579s" podCreationTimestamp="2025-08-13 01:13:17 +0000 UTC" firstStartedPulling="2025-08-13 01:13:18.454392942 +0000 UTC m=+22.092068381" lastFinishedPulling="2025-08-13 01:15:18.526076821 +0000 UTC m=+142.163752250" observedRunningTime="2025-08-13 01:15:19.042182337 +0000 UTC m=+142.679857766" watchObservedRunningTime="2025-08-13 01:15:19.04392579 +0000 UTC m=+142.681601219" Aug 13 01:15:19.141725 containerd[1543]: time="2025-08-13T01:15:19.141625817Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b1fd6ba154f501aeae79f3ad39f14eac062a2010a34c066c83e2ce20e677260b\" id:\"f53f57604a6e678ee9f6c83b1318a04d7fb61ed64f38deae53515a464feafe9c\" pid:4932 exit_status:1 exited_at:{seconds:1755047719 nanos:141054953}" Aug 13 01:15:19.149560 sshd[4910]: Accepted publickey for core from 147.75.109.163 port 40660 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:15:19.151436 sshd-session[4910]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:15:19.158029 systemd-logind[1515]: New session 13 of user core. Aug 13 01:15:19.163507 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 13 01:15:19.502402 sshd[4954]: Connection closed by 147.75.109.163 port 40660 Aug 13 01:15:19.503890 sshd-session[4910]: pam_unix(sshd:session): session closed for user core Aug 13 01:15:19.510569 systemd[1]: sshd@12-172.234.199.8:22-147.75.109.163:40660.service: Deactivated successfully. Aug 13 01:15:19.513332 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 01:15:19.514701 systemd-logind[1515]: Session 13 logged out. Waiting for processes to exit. Aug 13 01:15:19.516195 systemd-logind[1515]: Removed session 13. Aug 13 01:15:19.566918 systemd[1]: Started sshd@13-172.234.199.8:22-147.75.109.163:40670.service - OpenSSH per-connection server daemon (147.75.109.163:40670). 
Aug 13 01:15:19.918340 sshd[4964]: Accepted publickey for core from 147.75.109.163 port 40670 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:15:19.920594 sshd-session[4964]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:15:19.928303 systemd-logind[1515]: New session 14 of user core. Aug 13 01:15:19.934667 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 13 01:15:20.113241 containerd[1543]: time="2025-08-13T01:15:20.113156449Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b1fd6ba154f501aeae79f3ad39f14eac062a2010a34c066c83e2ce20e677260b\" id:\"5b386d0ec60e69bb97931ab04578cbc65ca74c75597799bc832c689e3fa6e737\" pid:4981 exit_status:1 exited_at:{seconds:1755047720 nanos:112481024}" Aug 13 01:15:20.269981 sshd[4966]: Connection closed by 147.75.109.163 port 40670 Aug 13 01:15:20.271959 sshd-session[4964]: pam_unix(sshd:session): session closed for user core Aug 13 01:15:20.279600 systemd[1]: sshd@13-172.234.199.8:22-147.75.109.163:40670.service: Deactivated successfully. Aug 13 01:15:20.285270 systemd[1]: session-14.scope: Deactivated successfully. Aug 13 01:15:20.292315 systemd-logind[1515]: Session 14 logged out. Waiting for processes to exit. Aug 13 01:15:20.294807 systemd-logind[1515]: Removed session 14. Aug 13 01:15:20.835393 systemd-networkd[1463]: vxlan.calico: Link UP Aug 13 01:15:20.835422 systemd-networkd[1463]: vxlan.calico: Gained carrier Aug 13 01:15:22.475665 systemd-networkd[1463]: vxlan.calico: Gained IPv6LL Aug 13 01:15:22.508269 kubelet[2774]: E0813 01:15:22.507944 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:15:22.509392 containerd[1543]: time="2025-08-13T01:15:22.509047774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wp8jf,Uid:56ce87f4-747f-470b-8388-a8400bdda009,Namespace:kube-system,Attempt:0,}" Aug 13 01:15:22.509820 containerd[1543]: time="2025-08-13T01:15:22.509724650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c5f875d88-fvdcx,Uid:e328d277-9a4f-4cd1-abd9-2dfeda4cfcc9,Namespace:calico-system,Attempt:0,}" Aug 13 01:15:22.651649 kubelet[2774]: I0813 01:15:22.651610 2774 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:15:22.651649 kubelet[2774]: I0813 01:15:22.651653 2774 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:15:22.653922 kubelet[2774]: I0813 01:15:22.653900 2774 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:15:22.671795 kubelet[2774]: I0813 01:15:22.671761 2774 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:15:22.671795 kubelet[2774]: I0813 01:15:22.671881 2774 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-668d6bf9bc-wp8jf","kube-system/coredns-668d6bf9bc-qnwfr","calico-system/calico-kube-controllers-c5f875d88-fvdcx","calico-system/csi-node-driver-bc7dg","calico-system/calico-typha-bf7d6589c-n48dd","kube-system/kube-controller-manager-172-234-199-8","calico-system/calico-node-9bc9j","kube-system/kube-proxy-8p8zx","kube-system/kube-apiserver-172-234-199-8","kube-system/kube-scheduler-172-234-199-8"] Aug 13 01:15:22.672319 kubelet[2774]: E0813 01:15:22.671916 2774 eviction_manager.go:609] "Eviction 
manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-wp8jf" Aug 13 01:15:22.672319 kubelet[2774]: E0813 01:15:22.671929 2774 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-qnwfr" Aug 13 01:15:22.672319 kubelet[2774]: E0813 01:15:22.671936 2774 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-c5f875d88-fvdcx" Aug 13 01:15:22.672319 kubelet[2774]: E0813 01:15:22.671943 2774 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-bc7dg" Aug 13 01:15:22.672319 kubelet[2774]: E0813 01:15:22.671954 2774 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-bf7d6589c-n48dd" Aug 13 01:15:22.672319 kubelet[2774]: E0813 01:15:22.671963 2774 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-199-8" Aug 13 01:15:22.672319 kubelet[2774]: E0813 01:15:22.671974 2774 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-9bc9j" Aug 13 01:15:22.672319 kubelet[2774]: E0813 01:15:22.671984 2774 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-8p8zx" Aug 13 01:15:22.672319 kubelet[2774]: E0813 01:15:22.671994 2774 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-199-8" Aug 13 01:15:22.672319 kubelet[2774]: E0813 01:15:22.672003 2774 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-199-8" Aug 13 01:15:22.672319 kubelet[2774]: I0813 01:15:22.672014 2774 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 01:15:22.691794 systemd-networkd[1463]: calia6fa1b66844: Link UP Aug 13 01:15:22.693637 systemd-networkd[1463]: calia6fa1b66844: Gained carrier Aug 13 01:15:22.720462 containerd[1543]: 2025-08-13 01:15:22.572 [INFO][5202] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--199--8-k8s-calico--kube--controllers--c5f875d88--fvdcx-eth0 calico-kube-controllers-c5f875d88- calico-system e328d277-9a4f-4cd1-abd9-2dfeda4cfcc9 825 0 2025-08-13 01:13:18 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:c5f875d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172-234-199-8 calico-kube-controllers-c5f875d88-fvdcx eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia6fa1b66844 [] [] }} ContainerID="783a0d679a00e5124c6e5cde52c00fb2920d3de43cd16935e892b8ac524764a7" Namespace="calico-system" Pod="calico-kube-controllers-c5f875d88-fvdcx" WorkloadEndpoint="172--234--199--8-k8s-calico--kube--controllers--c5f875d88--fvdcx-" Aug 13 01:15:22.720462 containerd[1543]: 2025-08-13 01:15:22.572 [INFO][5202] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="783a0d679a00e5124c6e5cde52c00fb2920d3de43cd16935e892b8ac524764a7" Namespace="calico-system" Pod="calico-kube-controllers-c5f875d88-fvdcx" WorkloadEndpoint="172--234--199--8-k8s-calico--kube--controllers--c5f875d88--fvdcx-eth0" Aug 13 01:15:22.720462 containerd[1543]: 2025-08-13 01:15:22.622 [INFO][5219] 
ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="783a0d679a00e5124c6e5cde52c00fb2920d3de43cd16935e892b8ac524764a7" HandleID="k8s-pod-network.783a0d679a00e5124c6e5cde52c00fb2920d3de43cd16935e892b8ac524764a7" Workload="172--234--199--8-k8s-calico--kube--controllers--c5f875d88--fvdcx-eth0" Aug 13 01:15:22.720747 containerd[1543]: 2025-08-13 01:15:22.622 [INFO][5219] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="783a0d679a00e5124c6e5cde52c00fb2920d3de43cd16935e892b8ac524764a7" HandleID="k8s-pod-network.783a0d679a00e5124c6e5cde52c00fb2920d3de43cd16935e892b8ac524764a7" Workload="172--234--199--8-k8s-calico--kube--controllers--c5f875d88--fvdcx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4ff0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-234-199-8", "pod":"calico-kube-controllers-c5f875d88-fvdcx", "timestamp":"2025-08-13 01:15:22.62222862 +0000 UTC"}, Hostname:"172-234-199-8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 01:15:22.720747 containerd[1543]: 2025-08-13 01:15:22.622 [INFO][5219] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:15:22.720747 containerd[1543]: 2025-08-13 01:15:22.623 [INFO][5219] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:15:22.720747 containerd[1543]: 2025-08-13 01:15:22.623 [INFO][5219] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-199-8' Aug 13 01:15:22.720747 containerd[1543]: 2025-08-13 01:15:22.631 [INFO][5219] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.783a0d679a00e5124c6e5cde52c00fb2920d3de43cd16935e892b8ac524764a7" host="172-234-199-8" Aug 13 01:15:22.720747 containerd[1543]: 2025-08-13 01:15:22.641 [INFO][5219] ipam/ipam.go 394: Looking up existing affinities for host host="172-234-199-8" Aug 13 01:15:22.720747 containerd[1543]: 2025-08-13 01:15:22.649 [INFO][5219] ipam/ipam.go 511: Trying affinity for 192.168.80.64/26 host="172-234-199-8" Aug 13 01:15:22.720747 containerd[1543]: 2025-08-13 01:15:22.651 [INFO][5219] ipam/ipam.go 158: Attempting to load block cidr=192.168.80.64/26 host="172-234-199-8" Aug 13 01:15:22.720747 containerd[1543]: 2025-08-13 01:15:22.655 [INFO][5219] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.80.64/26 host="172-234-199-8" Aug 13 01:15:22.720940 containerd[1543]: 2025-08-13 01:15:22.656 [INFO][5219] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.80.64/26 handle="k8s-pod-network.783a0d679a00e5124c6e5cde52c00fb2920d3de43cd16935e892b8ac524764a7" host="172-234-199-8" Aug 13 01:15:22.720940 containerd[1543]: 2025-08-13 01:15:22.658 [INFO][5219] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.783a0d679a00e5124c6e5cde52c00fb2920d3de43cd16935e892b8ac524764a7 Aug 13 01:15:22.720940 containerd[1543]: 2025-08-13 01:15:22.669 [INFO][5219] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.80.64/26 handle="k8s-pod-network.783a0d679a00e5124c6e5cde52c00fb2920d3de43cd16935e892b8ac524764a7" host="172-234-199-8" Aug 13 01:15:22.720940 containerd[1543]: 2025-08-13 01:15:22.677 [INFO][5219] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.80.65/26] block=192.168.80.64/26 handle="k8s-pod-network.783a0d679a00e5124c6e5cde52c00fb2920d3de43cd16935e892b8ac524764a7" host="172-234-199-8" Aug 13 
01:15:22.720940 containerd[1543]: 2025-08-13 01:15:22.677 [INFO][5219] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.80.65/26] handle="k8s-pod-network.783a0d679a00e5124c6e5cde52c00fb2920d3de43cd16935e892b8ac524764a7" host="172-234-199-8" Aug 13 01:15:22.720940 containerd[1543]: 2025-08-13 01:15:22.677 [INFO][5219] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:15:22.720940 containerd[1543]: 2025-08-13 01:15:22.677 [INFO][5219] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.80.65/26] IPv6=[] ContainerID="783a0d679a00e5124c6e5cde52c00fb2920d3de43cd16935e892b8ac524764a7" HandleID="k8s-pod-network.783a0d679a00e5124c6e5cde52c00fb2920d3de43cd16935e892b8ac524764a7" Workload="172--234--199--8-k8s-calico--kube--controllers--c5f875d88--fvdcx-eth0" Aug 13 01:15:22.721088 containerd[1543]: 2025-08-13 01:15:22.686 [INFO][5202] cni-plugin/k8s.go 418: Populated endpoint ContainerID="783a0d679a00e5124c6e5cde52c00fb2920d3de43cd16935e892b8ac524764a7" Namespace="calico-system" Pod="calico-kube-controllers-c5f875d88-fvdcx" WorkloadEndpoint="172--234--199--8-k8s-calico--kube--controllers--c5f875d88--fvdcx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--199--8-k8s-calico--kube--controllers--c5f875d88--fvdcx-eth0", GenerateName:"calico-kube-controllers-c5f875d88-", Namespace:"calico-system", SelfLink:"", UID:"e328d277-9a4f-4cd1-abd9-2dfeda4cfcc9", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 13, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c5f875d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-199-8", ContainerID:"", Pod:"calico-kube-controllers-c5f875d88-fvdcx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.80.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia6fa1b66844", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:15:22.721956 containerd[1543]: 2025-08-13 01:15:22.686 [INFO][5202] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.80.65/32] ContainerID="783a0d679a00e5124c6e5cde52c00fb2920d3de43cd16935e892b8ac524764a7" Namespace="calico-system" Pod="calico-kube-controllers-c5f875d88-fvdcx" WorkloadEndpoint="172--234--199--8-k8s-calico--kube--controllers--c5f875d88--fvdcx-eth0" Aug 13 01:15:22.721956 containerd[1543]: 2025-08-13 01:15:22.686 [INFO][5202] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia6fa1b66844 ContainerID="783a0d679a00e5124c6e5cde52c00fb2920d3de43cd16935e892b8ac524764a7" Namespace="calico-system" Pod="calico-kube-controllers-c5f875d88-fvdcx" WorkloadEndpoint="172--234--199--8-k8s-calico--kube--controllers--c5f875d88--fvdcx-eth0" Aug 13 01:15:22.721956 containerd[1543]: 2025-08-13 01:15:22.694 [INFO][5202] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="783a0d679a00e5124c6e5cde52c00fb2920d3de43cd16935e892b8ac524764a7" Namespace="calico-system" Pod="calico-kube-controllers-c5f875d88-fvdcx" WorkloadEndpoint="172--234--199--8-k8s-calico--kube--controllers--c5f875d88--fvdcx-eth0" Aug 13 01:15:22.722037 containerd[1543]: 2025-08-13 01:15:22.695 [INFO][5202] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="783a0d679a00e5124c6e5cde52c00fb2920d3de43cd16935e892b8ac524764a7" Namespace="calico-system" Pod="calico-kube-controllers-c5f875d88-fvdcx" WorkloadEndpoint="172--234--199--8-k8s-calico--kube--controllers--c5f875d88--fvdcx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--199--8-k8s-calico--kube--controllers--c5f875d88--fvdcx-eth0", GenerateName:"calico-kube-controllers-c5f875d88-", Namespace:"calico-system", SelfLink:"", UID:"e328d277-9a4f-4cd1-abd9-2dfeda4cfcc9", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 13, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c5f875d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-199-8", ContainerID:"783a0d679a00e5124c6e5cde52c00fb2920d3de43cd16935e892b8ac524764a7", Pod:"calico-kube-controllers-c5f875d88-fvdcx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.80.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia6fa1b66844", MAC:"12:0d:3d:55:7d:b3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:15:22.722113 containerd[1543]: 2025-08-13 01:15:22.710 [INFO][5202] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="783a0d679a00e5124c6e5cde52c00fb2920d3de43cd16935e892b8ac524764a7" Namespace="calico-system" Pod="calico-kube-controllers-c5f875d88-fvdcx" WorkloadEndpoint="172--234--199--8-k8s-calico--kube--controllers--c5f875d88--fvdcx-eth0" Aug 13 01:15:22.787128 containerd[1543]: time="2025-08-13T01:15:22.786702455Z" level=info msg="connecting to shim 783a0d679a00e5124c6e5cde52c00fb2920d3de43cd16935e892b8ac524764a7" address="unix:///run/containerd/s/3ed3353ec959e0219c3ace283defa59a0ccd918a323e4025d9c0fc84f0402281" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:15:22.823612 systemd-networkd[1463]: cali7bab3aa5f75: Link UP Aug 13 01:15:22.830517 systemd-networkd[1463]: cali7bab3aa5f75: Gained carrier Aug 13 01:15:22.849343 containerd[1543]: 2025-08-13 01:15:22.573 [INFO][5195] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--199--8-k8s-coredns--668d6bf9bc--wp8jf-eth0 coredns-668d6bf9bc- kube-system 56ce87f4-747f-470b-8388-a8400bdda009 829 0 2025-08-13 01:13:01 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc 
projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-234-199-8 coredns-668d6bf9bc-wp8jf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7bab3aa5f75 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3b03412c182aa3f8a3b45d0888db06d4a801def1ae1ae98c03a8d5fcea5560ed" Namespace="kube-system" Pod="coredns-668d6bf9bc-wp8jf" WorkloadEndpoint="172--234--199--8-k8s-coredns--668d6bf9bc--wp8jf-" Aug 13 01:15:22.849343 containerd[1543]: 2025-08-13 01:15:22.573 [INFO][5195] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3b03412c182aa3f8a3b45d0888db06d4a801def1ae1ae98c03a8d5fcea5560ed" Namespace="kube-system" Pod="coredns-668d6bf9bc-wp8jf" WorkloadEndpoint="172--234--199--8-k8s-coredns--668d6bf9bc--wp8jf-eth0" Aug 13 01:15:22.849343 containerd[1543]: 2025-08-13 01:15:22.622 [INFO][5221] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3b03412c182aa3f8a3b45d0888db06d4a801def1ae1ae98c03a8d5fcea5560ed" HandleID="k8s-pod-network.3b03412c182aa3f8a3b45d0888db06d4a801def1ae1ae98c03a8d5fcea5560ed" Workload="172--234--199--8-k8s-coredns--668d6bf9bc--wp8jf-eth0" Aug 13 01:15:22.850958 containerd[1543]: 2025-08-13 01:15:22.623 [INFO][5221] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3b03412c182aa3f8a3b45d0888db06d4a801def1ae1ae98c03a8d5fcea5560ed" HandleID="k8s-pod-network.3b03412c182aa3f8a3b45d0888db06d4a801def1ae1ae98c03a8d5fcea5560ed" Workload="172--234--199--8-k8s-coredns--668d6bf9bc--wp8jf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000333660), Attrs:map[string]string{"namespace":"kube-system", "node":"172-234-199-8", "pod":"coredns-668d6bf9bc-wp8jf", "timestamp":"2025-08-13 01:15:22.622649062 +0000 UTC"}, Hostname:"172-234-199-8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 01:15:22.850958 containerd[1543]: 2025-08-13 01:15:22.624 [INFO][5221] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:15:22.850958 containerd[1543]: 2025-08-13 01:15:22.677 [INFO][5221] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
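[Editor's note] The eviction_manager.go entries at the start of this stretch show the kubelet under node pressure yet "unable to evict any pods from the node": every pod it considers is classed as critical. The Go sketch below is only an illustration of that documented rule (static or mirror pods, and pods at system-critical priority or above, are never evicted); it is not kubelet code, and the priority constant is the well-known value of the system-cluster-critical priority class.

// Illustrative only: why the eviction manager above skips every pod it lists.
// Static pods, mirror pods, and pods at or above system-cluster-critical
// priority are treated as critical and are never evicted, so a node running
// only control-plane, CoreDNS, and Calico pods has nothing it may reclaim.
package main

import "fmt"

const systemCriticalPriority int32 = 2000000000 // system-cluster-critical

func isCriticalPod(staticOrMirror bool, priority int32) bool {
	return staticOrMirror || priority >= systemCriticalPriority
}

func main() {
	fmt.Println(isCriticalPod(true, 0))                            // static pod such as kube-apiserver: true
	fmt.Println(isCriticalPod(false, systemCriticalPriority+1000)) // system-node-critical (e.g. calico-node): true
	fmt.Println(isCriticalPod(false, 0))                           // ordinary workload pod: false
}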
Aug 13 01:15:22.850958 containerd[1543]: 2025-08-13 01:15:22.678 [INFO][5221] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-199-8' Aug 13 01:15:22.850958 containerd[1543]: 2025-08-13 01:15:22.733 [INFO][5221] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3b03412c182aa3f8a3b45d0888db06d4a801def1ae1ae98c03a8d5fcea5560ed" host="172-234-199-8" Aug 13 01:15:22.850958 containerd[1543]: 2025-08-13 01:15:22.755 [INFO][5221] ipam/ipam.go 394: Looking up existing affinities for host host="172-234-199-8" Aug 13 01:15:22.850958 containerd[1543]: 2025-08-13 01:15:22.768 [INFO][5221] ipam/ipam.go 511: Trying affinity for 192.168.80.64/26 host="172-234-199-8" Aug 13 01:15:22.850958 containerd[1543]: 2025-08-13 01:15:22.771 [INFO][5221] ipam/ipam.go 158: Attempting to load block cidr=192.168.80.64/26 host="172-234-199-8" Aug 13 01:15:22.850958 containerd[1543]: 2025-08-13 01:15:22.776 [INFO][5221] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.80.64/26 host="172-234-199-8" Aug 13 01:15:22.850958 containerd[1543]: 2025-08-13 01:15:22.776 [INFO][5221] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.80.64/26 handle="k8s-pod-network.3b03412c182aa3f8a3b45d0888db06d4a801def1ae1ae98c03a8d5fcea5560ed" host="172-234-199-8" Aug 13 01:15:22.851800 containerd[1543]: 2025-08-13 01:15:22.780 [INFO][5221] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3b03412c182aa3f8a3b45d0888db06d4a801def1ae1ae98c03a8d5fcea5560ed Aug 13 01:15:22.851800 containerd[1543]: 2025-08-13 01:15:22.791 [INFO][5221] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.80.64/26 handle="k8s-pod-network.3b03412c182aa3f8a3b45d0888db06d4a801def1ae1ae98c03a8d5fcea5560ed" host="172-234-199-8" Aug 13 01:15:22.851800 containerd[1543]: 2025-08-13 01:15:22.806 [INFO][5221] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.80.66/26] block=192.168.80.64/26 handle="k8s-pod-network.3b03412c182aa3f8a3b45d0888db06d4a801def1ae1ae98c03a8d5fcea5560ed" host="172-234-199-8" Aug 13 01:15:22.851800 containerd[1543]: 2025-08-13 01:15:22.806 [INFO][5221] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.80.66/26] handle="k8s-pod-network.3b03412c182aa3f8a3b45d0888db06d4a801def1ae1ae98c03a8d5fcea5560ed" host="172-234-199-8" Aug 13 01:15:22.851800 containerd[1543]: 2025-08-13 01:15:22.806 [INFO][5221] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 01:15:22.851800 containerd[1543]: 2025-08-13 01:15:22.806 [INFO][5221] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.80.66/26] IPv6=[] ContainerID="3b03412c182aa3f8a3b45d0888db06d4a801def1ae1ae98c03a8d5fcea5560ed" HandleID="k8s-pod-network.3b03412c182aa3f8a3b45d0888db06d4a801def1ae1ae98c03a8d5fcea5560ed" Workload="172--234--199--8-k8s-coredns--668d6bf9bc--wp8jf-eth0" Aug 13 01:15:22.852061 containerd[1543]: 2025-08-13 01:15:22.816 [INFO][5195] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3b03412c182aa3f8a3b45d0888db06d4a801def1ae1ae98c03a8d5fcea5560ed" Namespace="kube-system" Pod="coredns-668d6bf9bc-wp8jf" WorkloadEndpoint="172--234--199--8-k8s-coredns--668d6bf9bc--wp8jf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--199--8-k8s-coredns--668d6bf9bc--wp8jf-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"56ce87f4-747f-470b-8388-a8400bdda009", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 13, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-199-8", ContainerID:"", Pod:"coredns-668d6bf9bc-wp8jf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.80.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7bab3aa5f75", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:15:22.852061 containerd[1543]: 2025-08-13 01:15:22.817 [INFO][5195] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.80.66/32] ContainerID="3b03412c182aa3f8a3b45d0888db06d4a801def1ae1ae98c03a8d5fcea5560ed" Namespace="kube-system" Pod="coredns-668d6bf9bc-wp8jf" WorkloadEndpoint="172--234--199--8-k8s-coredns--668d6bf9bc--wp8jf-eth0" Aug 13 01:15:22.852061 containerd[1543]: 2025-08-13 01:15:22.817 [INFO][5195] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7bab3aa5f75 ContainerID="3b03412c182aa3f8a3b45d0888db06d4a801def1ae1ae98c03a8d5fcea5560ed" Namespace="kube-system" Pod="coredns-668d6bf9bc-wp8jf" WorkloadEndpoint="172--234--199--8-k8s-coredns--668d6bf9bc--wp8jf-eth0" Aug 13 01:15:22.852061 containerd[1543]: 2025-08-13 01:15:22.823 [INFO][5195] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3b03412c182aa3f8a3b45d0888db06d4a801def1ae1ae98c03a8d5fcea5560ed" Namespace="kube-system" Pod="coredns-668d6bf9bc-wp8jf" 
WorkloadEndpoint="172--234--199--8-k8s-coredns--668d6bf9bc--wp8jf-eth0" Aug 13 01:15:22.852061 containerd[1543]: 2025-08-13 01:15:22.825 [INFO][5195] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3b03412c182aa3f8a3b45d0888db06d4a801def1ae1ae98c03a8d5fcea5560ed" Namespace="kube-system" Pod="coredns-668d6bf9bc-wp8jf" WorkloadEndpoint="172--234--199--8-k8s-coredns--668d6bf9bc--wp8jf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--199--8-k8s-coredns--668d6bf9bc--wp8jf-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"56ce87f4-747f-470b-8388-a8400bdda009", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 13, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-199-8", ContainerID:"3b03412c182aa3f8a3b45d0888db06d4a801def1ae1ae98c03a8d5fcea5560ed", Pod:"coredns-668d6bf9bc-wp8jf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.80.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7bab3aa5f75", MAC:"06:dc:f3:bf:6c:f5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:15:22.852061 containerd[1543]: 2025-08-13 01:15:22.844 [INFO][5195] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3b03412c182aa3f8a3b45d0888db06d4a801def1ae1ae98c03a8d5fcea5560ed" Namespace="kube-system" Pod="coredns-668d6bf9bc-wp8jf" WorkloadEndpoint="172--234--199--8-k8s-coredns--668d6bf9bc--wp8jf-eth0" Aug 13 01:15:22.872709 systemd[1]: Started cri-containerd-783a0d679a00e5124c6e5cde52c00fb2920d3de43cd16935e892b8ac524764a7.scope - libcontainer container 783a0d679a00e5124c6e5cde52c00fb2920d3de43cd16935e892b8ac524764a7. Aug 13 01:15:22.897049 containerd[1543]: time="2025-08-13T01:15:22.896860880Z" level=info msg="connecting to shim 3b03412c182aa3f8a3b45d0888db06d4a801def1ae1ae98c03a8d5fcea5560ed" address="unix:///run/containerd/s/122ce32899983f08fb58a43cd95f8907ca7d64e164b2c32ff13f1b75646c9a69" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:15:22.944440 systemd[1]: Started cri-containerd-3b03412c182aa3f8a3b45d0888db06d4a801def1ae1ae98c03a8d5fcea5560ed.scope - libcontainer container 3b03412c182aa3f8a3b45d0888db06d4a801def1ae1ae98c03a8d5fcea5560ed. 
Aug 13 01:15:22.990272 containerd[1543]: time="2025-08-13T01:15:22.990200278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c5f875d88-fvdcx,Uid:e328d277-9a4f-4cd1-abd9-2dfeda4cfcc9,Namespace:calico-system,Attempt:0,} returns sandbox id \"783a0d679a00e5124c6e5cde52c00fb2920d3de43cd16935e892b8ac524764a7\"" Aug 13 01:15:22.993779 containerd[1543]: time="2025-08-13T01:15:22.993745472Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 01:15:23.032940 containerd[1543]: time="2025-08-13T01:15:23.032825089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wp8jf,Uid:56ce87f4-747f-470b-8388-a8400bdda009,Namespace:kube-system,Attempt:0,} returns sandbox id \"3b03412c182aa3f8a3b45d0888db06d4a801def1ae1ae98c03a8d5fcea5560ed\"" Aug 13 01:15:23.034539 kubelet[2774]: E0813 01:15:23.034504 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:15:24.077454 systemd-networkd[1463]: cali7bab3aa5f75: Gained IPv6LL Aug 13 01:15:24.587648 systemd-networkd[1463]: calia6fa1b66844: Gained IPv6LL Aug 13 01:15:25.005885 containerd[1543]: time="2025-08-13T01:15:25.005431021Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to extract layer sha256:02d3dffb3ef10df51972f4bc886d3c12267d2c7867905840dea1b421677959b9: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/75/fs/usr/bin/kube-controllers: no space left on device" Aug 13 01:15:25.005885 containerd[1543]: time="2025-08-13T01:15:25.005666422Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Aug 13 01:15:25.006888 kubelet[2774]: E0813 01:15:25.006738 2774 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to extract layer sha256:02d3dffb3ef10df51972f4bc886d3c12267d2c7867905840dea1b421677959b9: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/75/fs/usr/bin/kube-controllers: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.2" Aug 13 01:15:25.006888 kubelet[2774]: E0813 01:15:25.006829 2774 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to extract layer sha256:02d3dffb3ef10df51972f4bc886d3c12267d2c7867905840dea1b421677959b9: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/75/fs/usr/bin/kube-controllers: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.2" Aug 13 01:15:25.008645 kubelet[2774]: E0813 01:15:25.007212 2774 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j4t6n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-c5f875d88-fvdcx_calico-system(e328d277-9a4f-4cd1-abd9-2dfeda4cfcc9): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to extract layer sha256:02d3dffb3ef10df51972f4bc886d3c12267d2c7867905840dea1b421677959b9: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/75/fs/usr/bin/kube-controllers: no space left on device" logger="UnhandledError" Aug 13 01:15:25.008753 containerd[1543]: time="2025-08-13T01:15:25.007755906Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 01:15:25.010175 kubelet[2774]: E0813 01:15:25.010114 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": failed to extract layer sha256:02d3dffb3ef10df51972f4bc886d3c12267d2c7867905840dea1b421677959b9: write 
/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/75/fs/usr/bin/kube-controllers: no space left on device\"" pod="calico-system/calico-kube-controllers-c5f875d88-fvdcx" podUID="e328d277-9a4f-4cd1-abd9-2dfeda4cfcc9" Aug 13 01:15:25.040365 kubelet[2774]: E0813 01:15:25.040273 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": failed to extract layer sha256:02d3dffb3ef10df51972f4bc886d3c12267d2c7867905840dea1b421677959b9: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/75/fs/usr/bin/kube-controllers: no space left on device\"" pod="calico-system/calico-kube-controllers-c5f875d88-fvdcx" podUID="e328d277-9a4f-4cd1-abd9-2dfeda4cfcc9" Aug 13 01:15:25.334588 systemd[1]: Started sshd@14-172.234.199.8:22-147.75.109.163:40678.service - OpenSSH per-connection server daemon (147.75.109.163:40678). Aug 13 01:15:25.700519 sshd[5354]: Accepted publickey for core from 147.75.109.163 port 40678 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:15:25.703605 sshd-session[5354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:15:25.710957 systemd-logind[1515]: New session 15 of user core. Aug 13 01:15:25.719564 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 13 01:15:25.786123 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1814364251.mount: Deactivated successfully. Aug 13 01:15:26.099454 sshd[5356]: Connection closed by 147.75.109.163 port 40678 Aug 13 01:15:26.103595 sshd-session[5354]: pam_unix(sshd:session): session closed for user core Aug 13 01:15:26.109429 systemd-logind[1515]: Session 15 logged out. Waiting for processes to exit. Aug 13 01:15:26.109949 systemd[1]: sshd@14-172.234.199.8:22-147.75.109.163:40678.service: Deactivated successfully. Aug 13 01:15:26.113214 systemd[1]: session-15.scope: Deactivated successfully. Aug 13 01:15:26.115728 systemd-logind[1515]: Removed session 15. Aug 13 01:15:26.170588 systemd[1]: Started sshd@15-172.234.199.8:22-147.75.109.163:40686.service - OpenSSH per-connection server daemon (147.75.109.163:40686). Aug 13 01:15:26.517509 containerd[1543]: time="2025-08-13T01:15:26.517457213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bc7dg,Uid:e1069218-cdb9-4130-adce-4bdd23361a59,Namespace:calico-system,Attempt:0,}" Aug 13 01:15:26.545764 sshd[5382]: Accepted publickey for core from 147.75.109.163 port 40686 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:15:26.548722 sshd-session[5382]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:15:26.563552 systemd-logind[1515]: New session 16 of user core. Aug 13 01:15:26.568667 systemd[1]: Started session-16.scope - Session 16 of User core. 
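[Editor's note] Both image pulls above fail at the same point: extracting a layer under /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs returns "no space left on device", after which the kubelet puts the pod into ImagePullBackOff. The usual first checks are df -h on that path and, if crictl is present on the node, crictl imagefsinfo and crictl rmi --prune to reclaim unused images. As a self-contained illustration, the small Go program below reports free space on the same filesystem; the path is taken from the log, and the program is just a stand-in for df, not part of any component shown here.

// Minimal sketch: report free space on the filesystem backing containerd's
// overlayfs snapshotter, which the "no space left on device" errors above
// exhausted. Linux-only; equivalent to running `df -h` on the same path.
package main

import (
	"fmt"
	"syscall"
)

func main() {
	const path = "/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
	var st syscall.Statfs_t
	if err := syscall.Statfs(path, &st); err != nil {
		panic(err)
	}
	avail := float64(st.Bavail) * float64(st.Bsize) / (1 << 30)
	total := float64(st.Blocks) * float64(st.Bsize) / (1 << 30)
	fmt.Printf("%s: %.2f GiB available of %.2f GiB\n", path, avail, total)
}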
Aug 13 01:15:26.682338 systemd-networkd[1463]: cali7401bcb3b79: Link UP Aug 13 01:15:26.683556 systemd-networkd[1463]: cali7401bcb3b79: Gained carrier Aug 13 01:15:26.718861 containerd[1543]: 2025-08-13 01:15:26.592 [INFO][5426] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--199--8-k8s-csi--node--driver--bc7dg-eth0 csi-node-driver- calico-system e1069218-cdb9-4130-adce-4bdd23361a59 706 0 2025-08-13 01:13:18 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172-234-199-8 csi-node-driver-bc7dg eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali7401bcb3b79 [] [] }} ContainerID="de9df3cb2eb8bc272847de2fe0f1dc19d15f085296607ae431c46593134b5633" Namespace="calico-system" Pod="csi-node-driver-bc7dg" WorkloadEndpoint="172--234--199--8-k8s-csi--node--driver--bc7dg-" Aug 13 01:15:26.718861 containerd[1543]: 2025-08-13 01:15:26.592 [INFO][5426] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="de9df3cb2eb8bc272847de2fe0f1dc19d15f085296607ae431c46593134b5633" Namespace="calico-system" Pod="csi-node-driver-bc7dg" WorkloadEndpoint="172--234--199--8-k8s-csi--node--driver--bc7dg-eth0" Aug 13 01:15:26.718861 containerd[1543]: 2025-08-13 01:15:26.625 [INFO][5439] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="de9df3cb2eb8bc272847de2fe0f1dc19d15f085296607ae431c46593134b5633" HandleID="k8s-pod-network.de9df3cb2eb8bc272847de2fe0f1dc19d15f085296607ae431c46593134b5633" Workload="172--234--199--8-k8s-csi--node--driver--bc7dg-eth0" Aug 13 01:15:26.718861 containerd[1543]: 2025-08-13 01:15:26.626 [INFO][5439] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="de9df3cb2eb8bc272847de2fe0f1dc19d15f085296607ae431c46593134b5633" HandleID="k8s-pod-network.de9df3cb2eb8bc272847de2fe0f1dc19d15f085296607ae431c46593134b5633" Workload="172--234--199--8-k8s-csi--node--driver--bc7dg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f020), Attrs:map[string]string{"namespace":"calico-system", "node":"172-234-199-8", "pod":"csi-node-driver-bc7dg", "timestamp":"2025-08-13 01:15:26.62579138 +0000 UTC"}, Hostname:"172-234-199-8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 01:15:26.718861 containerd[1543]: 2025-08-13 01:15:26.626 [INFO][5439] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:15:26.718861 containerd[1543]: 2025-08-13 01:15:26.626 [INFO][5439] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 01:15:26.718861 containerd[1543]: 2025-08-13 01:15:26.626 [INFO][5439] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-199-8' Aug 13 01:15:26.718861 containerd[1543]: 2025-08-13 01:15:26.635 [INFO][5439] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.de9df3cb2eb8bc272847de2fe0f1dc19d15f085296607ae431c46593134b5633" host="172-234-199-8" Aug 13 01:15:26.718861 containerd[1543]: 2025-08-13 01:15:26.643 [INFO][5439] ipam/ipam.go 394: Looking up existing affinities for host host="172-234-199-8" Aug 13 01:15:26.718861 containerd[1543]: 2025-08-13 01:15:26.652 [INFO][5439] ipam/ipam.go 511: Trying affinity for 192.168.80.64/26 host="172-234-199-8" Aug 13 01:15:26.718861 containerd[1543]: 2025-08-13 01:15:26.654 [INFO][5439] ipam/ipam.go 158: Attempting to load block cidr=192.168.80.64/26 host="172-234-199-8" Aug 13 01:15:26.718861 containerd[1543]: 2025-08-13 01:15:26.659 [INFO][5439] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.80.64/26 host="172-234-199-8" Aug 13 01:15:26.718861 containerd[1543]: 2025-08-13 01:15:26.659 [INFO][5439] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.80.64/26 handle="k8s-pod-network.de9df3cb2eb8bc272847de2fe0f1dc19d15f085296607ae431c46593134b5633" host="172-234-199-8" Aug 13 01:15:26.718861 containerd[1543]: 2025-08-13 01:15:26.661 [INFO][5439] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.de9df3cb2eb8bc272847de2fe0f1dc19d15f085296607ae431c46593134b5633 Aug 13 01:15:26.718861 containerd[1543]: 2025-08-13 01:15:26.665 [INFO][5439] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.80.64/26 handle="k8s-pod-network.de9df3cb2eb8bc272847de2fe0f1dc19d15f085296607ae431c46593134b5633" host="172-234-199-8" Aug 13 01:15:26.718861 containerd[1543]: 2025-08-13 01:15:26.671 [INFO][5439] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.80.67/26] block=192.168.80.64/26 handle="k8s-pod-network.de9df3cb2eb8bc272847de2fe0f1dc19d15f085296607ae431c46593134b5633" host="172-234-199-8" Aug 13 01:15:26.718861 containerd[1543]: 2025-08-13 01:15:26.671 [INFO][5439] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.80.67/26] handle="k8s-pod-network.de9df3cb2eb8bc272847de2fe0f1dc19d15f085296607ae431c46593134b5633" host="172-234-199-8" Aug 13 01:15:26.718861 containerd[1543]: 2025-08-13 01:15:26.671 [INFO][5439] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 01:15:26.718861 containerd[1543]: 2025-08-13 01:15:26.671 [INFO][5439] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.80.67/26] IPv6=[] ContainerID="de9df3cb2eb8bc272847de2fe0f1dc19d15f085296607ae431c46593134b5633" HandleID="k8s-pod-network.de9df3cb2eb8bc272847de2fe0f1dc19d15f085296607ae431c46593134b5633" Workload="172--234--199--8-k8s-csi--node--driver--bc7dg-eth0" Aug 13 01:15:26.721313 containerd[1543]: 2025-08-13 01:15:26.675 [INFO][5426] cni-plugin/k8s.go 418: Populated endpoint ContainerID="de9df3cb2eb8bc272847de2fe0f1dc19d15f085296607ae431c46593134b5633" Namespace="calico-system" Pod="csi-node-driver-bc7dg" WorkloadEndpoint="172--234--199--8-k8s-csi--node--driver--bc7dg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--199--8-k8s-csi--node--driver--bc7dg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e1069218-cdb9-4130-adce-4bdd23361a59", ResourceVersion:"706", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 13, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-199-8", ContainerID:"", Pod:"csi-node-driver-bc7dg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.80.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7401bcb3b79", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:15:26.721313 containerd[1543]: 2025-08-13 01:15:26.675 [INFO][5426] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.80.67/32] ContainerID="de9df3cb2eb8bc272847de2fe0f1dc19d15f085296607ae431c46593134b5633" Namespace="calico-system" Pod="csi-node-driver-bc7dg" WorkloadEndpoint="172--234--199--8-k8s-csi--node--driver--bc7dg-eth0" Aug 13 01:15:26.721313 containerd[1543]: 2025-08-13 01:15:26.676 [INFO][5426] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7401bcb3b79 ContainerID="de9df3cb2eb8bc272847de2fe0f1dc19d15f085296607ae431c46593134b5633" Namespace="calico-system" Pod="csi-node-driver-bc7dg" WorkloadEndpoint="172--234--199--8-k8s-csi--node--driver--bc7dg-eth0" Aug 13 01:15:26.721313 containerd[1543]: 2025-08-13 01:15:26.684 [INFO][5426] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="de9df3cb2eb8bc272847de2fe0f1dc19d15f085296607ae431c46593134b5633" Namespace="calico-system" Pod="csi-node-driver-bc7dg" WorkloadEndpoint="172--234--199--8-k8s-csi--node--driver--bc7dg-eth0" Aug 13 01:15:26.721313 containerd[1543]: 2025-08-13 01:15:26.685 [INFO][5426] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="de9df3cb2eb8bc272847de2fe0f1dc19d15f085296607ae431c46593134b5633" Namespace="calico-system" 
Pod="csi-node-driver-bc7dg" WorkloadEndpoint="172--234--199--8-k8s-csi--node--driver--bc7dg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--199--8-k8s-csi--node--driver--bc7dg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e1069218-cdb9-4130-adce-4bdd23361a59", ResourceVersion:"706", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 13, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-199-8", ContainerID:"de9df3cb2eb8bc272847de2fe0f1dc19d15f085296607ae431c46593134b5633", Pod:"csi-node-driver-bc7dg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.80.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7401bcb3b79", MAC:"f2:ff:f0:63:22:1f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:15:26.721313 containerd[1543]: 2025-08-13 01:15:26.706 [INFO][5426] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="de9df3cb2eb8bc272847de2fe0f1dc19d15f085296607ae431c46593134b5633" Namespace="calico-system" Pod="csi-node-driver-bc7dg" WorkloadEndpoint="172--234--199--8-k8s-csi--node--driver--bc7dg-eth0" Aug 13 01:15:26.814745 containerd[1543]: time="2025-08-13T01:15:26.813975304Z" level=info msg="connecting to shim de9df3cb2eb8bc272847de2fe0f1dc19d15f085296607ae431c46593134b5633" address="unix:///run/containerd/s/9d62219bfeafffe9d3fc20e065ce19c4a2f4015452fcd846b1ccd2c004f596ea" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:15:26.897572 systemd[1]: Started cri-containerd-de9df3cb2eb8bc272847de2fe0f1dc19d15f085296607ae431c46593134b5633.scope - libcontainer container de9df3cb2eb8bc272847de2fe0f1dc19d15f085296607ae431c46593134b5633. Aug 13 01:15:26.907959 sshd[5436]: Connection closed by 147.75.109.163 port 40686 Aug 13 01:15:26.910481 sshd-session[5382]: pam_unix(sshd:session): session closed for user core Aug 13 01:15:26.918164 systemd[1]: sshd@15-172.234.199.8:22-147.75.109.163:40686.service: Deactivated successfully. Aug 13 01:15:26.922787 systemd[1]: session-16.scope: Deactivated successfully. Aug 13 01:15:26.925926 systemd-logind[1515]: Session 16 logged out. Waiting for processes to exit. Aug 13 01:15:26.928660 systemd-logind[1515]: Removed session 16. 
Aug 13 01:15:26.947286 containerd[1543]: time="2025-08-13T01:15:26.947242996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bc7dg,Uid:e1069218-cdb9-4130-adce-4bdd23361a59,Namespace:calico-system,Attempt:0,} returns sandbox id \"de9df3cb2eb8bc272847de2fe0f1dc19d15f085296607ae431c46593134b5633\"" Aug 13 01:15:27.507330 kubelet[2774]: E0813 01:15:27.507264 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:15:28.307830 systemd-networkd[1463]: cali7401bcb3b79: Gained IPv6LL Aug 13 01:15:28.941003 containerd[1543]: time="2025-08-13T01:15:28.940109382Z" level=error msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.11.3\": failed to extract layer sha256:21c2962c204299c738896a757fbcc4190df6d7992af7b31457fb71bbac86df7c: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/88/fs/coredns: no space left on device" Aug 13 01:15:28.941003 containerd[1543]: time="2025-08-13T01:15:28.940279963Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Aug 13 01:15:28.941915 kubelet[2774]: E0813 01:15:28.940525 2774 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.11.3\": failed to extract layer sha256:21c2962c204299c738896a757fbcc4190df6d7992af7b31457fb71bbac86df7c: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/88/fs/coredns: no space left on device" image="registry.k8s.io/coredns/coredns:v1.11.3" Aug 13 01:15:28.941915 kubelet[2774]: E0813 01:15:28.940601 2774 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.11.3\": failed to extract layer sha256:21c2962c204299c738896a757fbcc4190df6d7992af7b31457fb71bbac86df7c: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/88/fs/coredns: no space left on device" image="registry.k8s.io/coredns/coredns:v1.11.3" Aug 13 01:15:28.941915 kubelet[2774]: E0813 01:15:28.940983 2774 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:coredns,Image:registry.k8s.io/coredns/coredns:v1.11.3,Command:[],Args:[-conf /etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:dns,HostPort:0,ContainerPort:53,Protocol:UDP,HostIP:,},ContainerPort{Name:dns-tcp,HostPort:0,ContainerPort:53,Protocol:TCP,HostIP:,},ContainerPort{Name:metrics,HostPort:0,ContainerPort:9153,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{memory: {{178257920 0} {} 170Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{73400320 0} {} 70Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-volume,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8gj6q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8080 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 8181 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod coredns-668d6bf9bc-wp8jf_kube-system(56ce87f4-747f-470b-8388-a8400bdda009): ErrImagePull: failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.11.3\": failed to extract layer sha256:21c2962c204299c738896a757fbcc4190df6d7992af7b31457fb71bbac86df7c: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/88/fs/coredns: no space left on device" logger="UnhandledError" Aug 13 01:15:28.946114 containerd[1543]: time="2025-08-13T01:15:28.942850460Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Aug 13 01:15:28.962273 kubelet[2774]: E0813 01:15:28.956306 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with ErrImagePull: \"failed to pull and unpack image \\\"registry.k8s.io/coredns/coredns:v1.11.3\\\": failed to extract layer sha256:21c2962c204299c738896a757fbcc4190df6d7992af7b31457fb71bbac86df7c: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/88/fs/coredns: no space left on device\"" pod="kube-system/coredns-668d6bf9bc-wp8jf" podUID="56ce87f4-747f-470b-8388-a8400bdda009" Aug 13 01:15:29.055218 kubelet[2774]: E0813 01:15:29.054899 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:15:29.057635 kubelet[2774]: E0813 01:15:29.057583 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/coredns/coredns:v1.11.3\\\": ErrImagePull: failed to pull and unpack image \\\"registry.k8s.io/coredns/coredns:v1.11.3\\\": failed to extract layer sha256:21c2962c204299c738896a757fbcc4190df6d7992af7b31457fb71bbac86df7c: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/88/fs/coredns: no space left on device\"" pod="kube-system/coredns-668d6bf9bc-wp8jf" podUID="56ce87f4-747f-470b-8388-a8400bdda009" Aug 13 01:15:30.074827 containerd[1543]: time="2025-08-13T01:15:30.074739898Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:15:30.076402 containerd[1543]: time="2025-08-13T01:15:30.075812385Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, 
bytes read=8759190" Aug 13 01:15:30.076892 containerd[1543]: time="2025-08-13T01:15:30.076829622Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:15:30.079323 containerd[1543]: time="2025-08-13T01:15:30.079271097Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:15:30.080440 containerd[1543]: time="2025-08-13T01:15:30.080394864Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 1.137487954s" Aug 13 01:15:30.080506 containerd[1543]: time="2025-08-13T01:15:30.080441784Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Aug 13 01:15:30.084386 containerd[1543]: time="2025-08-13T01:15:30.084218629Z" level=info msg="CreateContainer within sandbox \"de9df3cb2eb8bc272847de2fe0f1dc19d15f085296607ae431c46593134b5633\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 13 01:15:30.099389 containerd[1543]: time="2025-08-13T01:15:30.098420090Z" level=info msg="Container 2baab26c04deeffbbbc803028025a921b6abe47e3abc8ccd631ea37d6faa3abb: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:15:30.109861 containerd[1543]: time="2025-08-13T01:15:30.109761753Z" level=info msg="CreateContainer within sandbox \"de9df3cb2eb8bc272847de2fe0f1dc19d15f085296607ae431c46593134b5633\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"2baab26c04deeffbbbc803028025a921b6abe47e3abc8ccd631ea37d6faa3abb\"" Aug 13 01:15:30.112388 containerd[1543]: time="2025-08-13T01:15:30.110584548Z" level=info msg="StartContainer for \"2baab26c04deeffbbbc803028025a921b6abe47e3abc8ccd631ea37d6faa3abb\"" Aug 13 01:15:30.112388 containerd[1543]: time="2025-08-13T01:15:30.112067458Z" level=info msg="connecting to shim 2baab26c04deeffbbbc803028025a921b6abe47e3abc8ccd631ea37d6faa3abb" address="unix:///run/containerd/s/9d62219bfeafffe9d3fc20e065ce19c4a2f4015452fcd846b1ccd2c004f596ea" protocol=ttrpc version=3 Aug 13 01:15:30.151568 systemd[1]: Started cri-containerd-2baab26c04deeffbbbc803028025a921b6abe47e3abc8ccd631ea37d6faa3abb.scope - libcontainer container 2baab26c04deeffbbbc803028025a921b6abe47e3abc8ccd631ea37d6faa3abb. 
Aug 13 01:15:30.205858 containerd[1543]: time="2025-08-13T01:15:30.205720678Z" level=info msg="StartContainer for \"2baab26c04deeffbbbc803028025a921b6abe47e3abc8ccd631ea37d6faa3abb\" returns successfully" Aug 13 01:15:30.208776 containerd[1543]: time="2025-08-13T01:15:30.208735937Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 13 01:15:30.507943 kubelet[2774]: E0813 01:15:30.507433 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:15:30.511111 containerd[1543]: time="2025-08-13T01:15:30.509513514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qnwfr,Uid:713f2d42-12d1-4f50-bdc7-9964c95a9e2a,Namespace:kube-system,Attempt:0,}" Aug 13 01:15:30.724694 systemd-networkd[1463]: cali3446fa05a88: Link UP Aug 13 01:15:30.725533 systemd-networkd[1463]: cali3446fa05a88: Gained carrier Aug 13 01:15:30.772594 containerd[1543]: 2025-08-13 01:15:30.596 [INFO][5551] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--199--8-k8s-coredns--668d6bf9bc--qnwfr-eth0 coredns-668d6bf9bc- kube-system 713f2d42-12d1-4f50-bdc7-9964c95a9e2a 826 0 2025-08-13 01:13:01 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-234-199-8 coredns-668d6bf9bc-qnwfr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3446fa05a88 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="cd3b64c1a86b3be18ed72976419dc77d8a44c29018b54129407e8a17ff6eff72" Namespace="kube-system" Pod="coredns-668d6bf9bc-qnwfr" WorkloadEndpoint="172--234--199--8-k8s-coredns--668d6bf9bc--qnwfr-" Aug 13 01:15:30.772594 containerd[1543]: 2025-08-13 01:15:30.596 [INFO][5551] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cd3b64c1a86b3be18ed72976419dc77d8a44c29018b54129407e8a17ff6eff72" Namespace="kube-system" Pod="coredns-668d6bf9bc-qnwfr" WorkloadEndpoint="172--234--199--8-k8s-coredns--668d6bf9bc--qnwfr-eth0" Aug 13 01:15:30.772594 containerd[1543]: 2025-08-13 01:15:30.639 [INFO][5564] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cd3b64c1a86b3be18ed72976419dc77d8a44c29018b54129407e8a17ff6eff72" HandleID="k8s-pod-network.cd3b64c1a86b3be18ed72976419dc77d8a44c29018b54129407e8a17ff6eff72" Workload="172--234--199--8-k8s-coredns--668d6bf9bc--qnwfr-eth0" Aug 13 01:15:30.772594 containerd[1543]: 2025-08-13 01:15:30.640 [INFO][5564] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cd3b64c1a86b3be18ed72976419dc77d8a44c29018b54129407e8a17ff6eff72" HandleID="k8s-pod-network.cd3b64c1a86b3be18ed72976419dc77d8a44c29018b54129407e8a17ff6eff72" Workload="172--234--199--8-k8s-coredns--668d6bf9bc--qnwfr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d57d0), Attrs:map[string]string{"namespace":"kube-system", "node":"172-234-199-8", "pod":"coredns-668d6bf9bc-qnwfr", "timestamp":"2025-08-13 01:15:30.639810409 +0000 UTC"}, Hostname:"172-234-199-8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 01:15:30.772594 containerd[1543]: 2025-08-13 01:15:30.640 [INFO][5564] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:15:30.772594 containerd[1543]: 2025-08-13 01:15:30.640 [INFO][5564] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:15:30.772594 containerd[1543]: 2025-08-13 01:15:30.640 [INFO][5564] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-199-8' Aug 13 01:15:30.772594 containerd[1543]: 2025-08-13 01:15:30.658 [INFO][5564] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cd3b64c1a86b3be18ed72976419dc77d8a44c29018b54129407e8a17ff6eff72" host="172-234-199-8" Aug 13 01:15:30.772594 containerd[1543]: 2025-08-13 01:15:30.666 [INFO][5564] ipam/ipam.go 394: Looking up existing affinities for host host="172-234-199-8" Aug 13 01:15:30.772594 containerd[1543]: 2025-08-13 01:15:30.674 [INFO][5564] ipam/ipam.go 511: Trying affinity for 192.168.80.64/26 host="172-234-199-8" Aug 13 01:15:30.772594 containerd[1543]: 2025-08-13 01:15:30.676 [INFO][5564] ipam/ipam.go 158: Attempting to load block cidr=192.168.80.64/26 host="172-234-199-8" Aug 13 01:15:30.772594 containerd[1543]: 2025-08-13 01:15:30.680 [INFO][5564] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.80.64/26 host="172-234-199-8" Aug 13 01:15:30.772594 containerd[1543]: 2025-08-13 01:15:30.680 [INFO][5564] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.80.64/26 handle="k8s-pod-network.cd3b64c1a86b3be18ed72976419dc77d8a44c29018b54129407e8a17ff6eff72" host="172-234-199-8" Aug 13 01:15:30.772594 containerd[1543]: 2025-08-13 01:15:30.683 [INFO][5564] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cd3b64c1a86b3be18ed72976419dc77d8a44c29018b54129407e8a17ff6eff72 Aug 13 01:15:30.772594 containerd[1543]: 2025-08-13 01:15:30.701 [INFO][5564] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.80.64/26 handle="k8s-pod-network.cd3b64c1a86b3be18ed72976419dc77d8a44c29018b54129407e8a17ff6eff72" host="172-234-199-8" Aug 13 01:15:30.772594 containerd[1543]: 2025-08-13 01:15:30.712 [INFO][5564] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.80.68/26] block=192.168.80.64/26 handle="k8s-pod-network.cd3b64c1a86b3be18ed72976419dc77d8a44c29018b54129407e8a17ff6eff72" host="172-234-199-8" Aug 13 01:15:30.772594 containerd[1543]: 2025-08-13 01:15:30.712 [INFO][5564] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.80.68/26] handle="k8s-pod-network.cd3b64c1a86b3be18ed72976419dc77d8a44c29018b54129407e8a17ff6eff72" host="172-234-199-8" Aug 13 01:15:30.772594 containerd[1543]: 2025-08-13 01:15:30.713 [INFO][5564] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 01:15:30.772594 containerd[1543]: 2025-08-13 01:15:30.713 [INFO][5564] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.80.68/26] IPv6=[] ContainerID="cd3b64c1a86b3be18ed72976419dc77d8a44c29018b54129407e8a17ff6eff72" HandleID="k8s-pod-network.cd3b64c1a86b3be18ed72976419dc77d8a44c29018b54129407e8a17ff6eff72" Workload="172--234--199--8-k8s-coredns--668d6bf9bc--qnwfr-eth0" Aug 13 01:15:30.773236 containerd[1543]: 2025-08-13 01:15:30.716 [INFO][5551] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cd3b64c1a86b3be18ed72976419dc77d8a44c29018b54129407e8a17ff6eff72" Namespace="kube-system" Pod="coredns-668d6bf9bc-qnwfr" WorkloadEndpoint="172--234--199--8-k8s-coredns--668d6bf9bc--qnwfr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--199--8-k8s-coredns--668d6bf9bc--qnwfr-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"713f2d42-12d1-4f50-bdc7-9964c95a9e2a", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 13, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-199-8", ContainerID:"", Pod:"coredns-668d6bf9bc-qnwfr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.80.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3446fa05a88", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:15:30.773236 containerd[1543]: 2025-08-13 01:15:30.716 [INFO][5551] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.80.68/32] ContainerID="cd3b64c1a86b3be18ed72976419dc77d8a44c29018b54129407e8a17ff6eff72" Namespace="kube-system" Pod="coredns-668d6bf9bc-qnwfr" WorkloadEndpoint="172--234--199--8-k8s-coredns--668d6bf9bc--qnwfr-eth0" Aug 13 01:15:30.773236 containerd[1543]: 2025-08-13 01:15:30.716 [INFO][5551] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3446fa05a88 ContainerID="cd3b64c1a86b3be18ed72976419dc77d8a44c29018b54129407e8a17ff6eff72" Namespace="kube-system" Pod="coredns-668d6bf9bc-qnwfr" WorkloadEndpoint="172--234--199--8-k8s-coredns--668d6bf9bc--qnwfr-eth0" Aug 13 01:15:30.773236 containerd[1543]: 2025-08-13 01:15:30.727 [INFO][5551] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cd3b64c1a86b3be18ed72976419dc77d8a44c29018b54129407e8a17ff6eff72" Namespace="kube-system" Pod="coredns-668d6bf9bc-qnwfr" 
WorkloadEndpoint="172--234--199--8-k8s-coredns--668d6bf9bc--qnwfr-eth0" Aug 13 01:15:30.773236 containerd[1543]: 2025-08-13 01:15:30.731 [INFO][5551] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cd3b64c1a86b3be18ed72976419dc77d8a44c29018b54129407e8a17ff6eff72" Namespace="kube-system" Pod="coredns-668d6bf9bc-qnwfr" WorkloadEndpoint="172--234--199--8-k8s-coredns--668d6bf9bc--qnwfr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--199--8-k8s-coredns--668d6bf9bc--qnwfr-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"713f2d42-12d1-4f50-bdc7-9964c95a9e2a", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 13, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-199-8", ContainerID:"cd3b64c1a86b3be18ed72976419dc77d8a44c29018b54129407e8a17ff6eff72", Pod:"coredns-668d6bf9bc-qnwfr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.80.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3446fa05a88", MAC:"3e:b1:0a:8d:48:8f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:15:30.773236 containerd[1543]: 2025-08-13 01:15:30.753 [INFO][5551] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cd3b64c1a86b3be18ed72976419dc77d8a44c29018b54129407e8a17ff6eff72" Namespace="kube-system" Pod="coredns-668d6bf9bc-qnwfr" WorkloadEndpoint="172--234--199--8-k8s-coredns--668d6bf9bc--qnwfr-eth0" Aug 13 01:15:30.828696 containerd[1543]: time="2025-08-13T01:15:30.828586689Z" level=info msg="connecting to shim cd3b64c1a86b3be18ed72976419dc77d8a44c29018b54129407e8a17ff6eff72" address="unix:///run/containerd/s/179c2343a292efbeadf9c90eef75a349f2e2cd0195a574f78ca7fa2aadf3d72f" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:15:30.901655 systemd[1]: Started cri-containerd-cd3b64c1a86b3be18ed72976419dc77d8a44c29018b54129407e8a17ff6eff72.scope - libcontainer container cd3b64c1a86b3be18ed72976419dc77d8a44c29018b54129407e8a17ff6eff72. 
Aug 13 01:15:31.016226 containerd[1543]: time="2025-08-13T01:15:31.015979729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qnwfr,Uid:713f2d42-12d1-4f50-bdc7-9964c95a9e2a,Namespace:kube-system,Attempt:0,} returns sandbox id \"cd3b64c1a86b3be18ed72976419dc77d8a44c29018b54129407e8a17ff6eff72\"" Aug 13 01:15:31.018944 kubelet[2774]: E0813 01:15:31.018909 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:15:31.189733 containerd[1543]: time="2025-08-13T01:15:31.189658924Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\": failed to extract layer sha256:fc0260a65ddba357b1d129f8ee26e320e324b952c3f6454255c10ab49e1b985e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/92/fs/usr/bin/node-driver-registrar: no space left on device" Aug 13 01:15:31.191747 containerd[1543]: time="2025-08-13T01:15:31.189793095Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Aug 13 01:15:31.191747 containerd[1543]: time="2025-08-13T01:15:31.190666390Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 01:15:31.191902 kubelet[2774]: E0813 01:15:31.189986 2774 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\": failed to extract layer sha256:fc0260a65ddba357b1d129f8ee26e320e324b952c3f6454255c10ab49e1b985e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/92/fs/usr/bin/node-driver-registrar: no space left on device" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2" Aug 13 01:15:31.191902 kubelet[2774]: E0813 01:15:31.190042 2774 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\": failed to extract layer sha256:fc0260a65ddba357b1d129f8ee26e320e324b952c3f6454255c10ab49e1b985e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/92/fs/usr/bin/node-driver-registrar: no space left on device" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2" Aug 13 01:15:31.191902 kubelet[2774]: E0813 01:15:31.190297 2774 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8bqfh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bc7dg_calico-system(e1069218-cdb9-4130-adce-4bdd23361a59): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\": failed to extract layer sha256:fc0260a65ddba357b1d129f8ee26e320e324b952c3f6454255c10ab49e1b985e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/92/fs/usr/bin/node-driver-registrar: no space left on device" logger="UnhandledError" Aug 13 01:15:31.192605 kubelet[2774]: E0813 01:15:31.192168 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\\\": failed to extract layer sha256:fc0260a65ddba357b1d129f8ee26e320e324b952c3f6454255c10ab49e1b985e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/92/fs/usr/bin/node-driver-registrar: no space left on device\"" pod="calico-system/csi-node-driver-bc7dg" podUID="e1069218-cdb9-4130-adce-4bdd23361a59" Aug 13 01:15:31.883676 systemd-networkd[1463]: cali3446fa05a88: Gained IPv6LL Aug 13 01:15:31.990715 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3698525507.mount: Deactivated successfully. 
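Editor's note: the pull failures above all end in "no space left on device" while extracting layers under /var/lib/containerd, i.e. the node's ephemeral storage is exhausted. The sketch below is a minimal Linux-only check of free space on that filesystem, the kind of probe that would have flagged this condition before the pulls started. The path and the 10% threshold are illustrative assumptions, not anything containerd or kubelet configure this way.

    // Report free space under the containerd root (Linux-only, syscall.Statfs).
    package main

    import (
    	"fmt"
    	"syscall"
    )

    func main() {
    	const path = "/var/lib/containerd" // assumed default containerd root
    	var st syscall.Statfs_t
    	if err := syscall.Statfs(path, &st); err != nil {
    		fmt.Println("statfs:", err)
    		return
    	}
    	total := st.Blocks * uint64(st.Bsize)
    	avail := st.Bavail * uint64(st.Bsize)
    	fmt.Printf("%s: %.1f GiB free of %.1f GiB\n",
    		path, float64(avail)/(1<<30), float64(total)/(1<<30))
    	if total > 0 && avail*10 < total { // under ~10% free (illustrative cutoff)
    		fmt.Println("warning: low disk space; image pulls will likely fail with ENOSPC")
    	}
    }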
Aug 13 01:15:32.074123 kubelet[2774]: E0813 01:15:32.074056 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\\\": failed to extract layer sha256:fc0260a65ddba357b1d129f8ee26e320e324b952c3f6454255c10ab49e1b985e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/92/fs/usr/bin/node-driver-registrar: no space left on device\"" pod="calico-system/csi-node-driver-bc7dg" podUID="e1069218-cdb9-4130-adce-4bdd23361a59" Aug 13 01:15:32.730066 kubelet[2774]: I0813 01:15:32.729999 2774 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:15:32.730515 kubelet[2774]: I0813 01:15:32.730148 2774 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:15:32.732946 kubelet[2774]: I0813 01:15:32.732905 2774 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:15:32.770727 kubelet[2774]: I0813 01:15:32.770579 2774 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:15:32.771372 kubelet[2774]: I0813 01:15:32.770991 2774 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-c5f875d88-fvdcx","kube-system/coredns-668d6bf9bc-wp8jf","kube-system/coredns-668d6bf9bc-qnwfr","calico-system/calico-typha-bf7d6589c-n48dd","calico-system/calico-node-9bc9j","kube-system/kube-controller-manager-172-234-199-8","kube-system/kube-proxy-8p8zx","kube-system/kube-apiserver-172-234-199-8","kube-system/kube-scheduler-172-234-199-8","calico-system/csi-node-driver-bc7dg"] Aug 13 01:15:32.771372 kubelet[2774]: E0813 01:15:32.771143 2774 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-c5f875d88-fvdcx" Aug 13 01:15:32.771372 kubelet[2774]: E0813 01:15:32.771256 2774 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-wp8jf" Aug 13 01:15:32.771372 kubelet[2774]: E0813 01:15:32.771269 2774 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-qnwfr" Aug 13 01:15:32.771372 kubelet[2774]: E0813 01:15:32.771280 2774 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-bf7d6589c-n48dd" Aug 13 01:15:32.771372 kubelet[2774]: E0813 01:15:32.771290 2774 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-9bc9j" Aug 13 01:15:32.771372 kubelet[2774]: E0813 01:15:32.771299 2774 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-199-8" Aug 13 01:15:32.771372 kubelet[2774]: E0813 01:15:32.771308 2774 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-8p8zx" Aug 13 01:15:32.772607 kubelet[2774]: E0813 01:15:32.771417 2774 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-199-8" Aug 13 01:15:32.772607 kubelet[2774]: E0813 01:15:32.771432 2774 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-199-8" Aug 13 
01:15:32.772607 kubelet[2774]: E0813 01:15:32.771440 2774 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-bc7dg" Aug 13 01:15:32.772607 kubelet[2774]: I0813 01:15:32.771452 2774 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 01:15:33.229312 containerd[1543]: time="2025-08-13T01:15:33.229121350Z" level=error msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.11.3\": failed to extract layer sha256:21c2962c204299c738896a757fbcc4190df6d7992af7b31457fb71bbac86df7c: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/106/fs/coredns: no space left on device" Aug 13 01:15:33.229312 containerd[1543]: time="2025-08-13T01:15:33.229256281Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Aug 13 01:15:33.230010 kubelet[2774]: E0813 01:15:33.229694 2774 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.11.3\": failed to extract layer sha256:21c2962c204299c738896a757fbcc4190df6d7992af7b31457fb71bbac86df7c: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/106/fs/coredns: no space left on device" image="registry.k8s.io/coredns/coredns:v1.11.3" Aug 13 01:15:33.230010 kubelet[2774]: E0813 01:15:33.229810 2774 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.11.3\": failed to extract layer sha256:21c2962c204299c738896a757fbcc4190df6d7992af7b31457fb71bbac86df7c: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/106/fs/coredns: no space left on device" image="registry.k8s.io/coredns/coredns:v1.11.3" Aug 13 01:15:33.232097 kubelet[2774]: E0813 01:15:33.232005 2774 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:coredns,Image:registry.k8s.io/coredns/coredns:v1.11.3,Command:[],Args:[-conf /etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:dns,HostPort:0,ContainerPort:53,Protocol:UDP,HostIP:,},ContainerPort{Name:dns-tcp,HostPort:0,ContainerPort:53,Protocol:TCP,HostIP:,},ContainerPort{Name:metrics,HostPort:0,ContainerPort:9153,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{memory: {{178257920 0} {} 170Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{73400320 0} {} 70Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-volume,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wnh4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 8181 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod coredns-668d6bf9bc-qnwfr_kube-system(713f2d42-12d1-4f50-bdc7-9964c95a9e2a): ErrImagePull: failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.11.3\": failed to extract layer sha256:21c2962c204299c738896a757fbcc4190df6d7992af7b31457fb71bbac86df7c: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/106/fs/coredns: no space left on device" logger="UnhandledError" Aug 13 01:15:33.233613 kubelet[2774]: E0813 01:15:33.233566 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with ErrImagePull: \"failed to pull and unpack image \\\"registry.k8s.io/coredns/coredns:v1.11.3\\\": failed to extract layer sha256:21c2962c204299c738896a757fbcc4190df6d7992af7b31457fb71bbac86df7c: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/106/fs/coredns: no space left on device\"" pod="kube-system/coredns-668d6bf9bc-qnwfr" podUID="713f2d42-12d1-4f50-bdc7-9964c95a9e2a" Aug 13 01:15:34.077438 kubelet[2774]: E0813 01:15:34.077170 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Aug 13 01:15:34.079812 kubelet[2774]: E0813 01:15:34.079699 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/coredns/coredns:v1.11.3\\\": ErrImagePull: failed to pull and unpack image \\\"registry.k8s.io/coredns/coredns:v1.11.3\\\": failed to extract layer sha256:21c2962c204299c738896a757fbcc4190df6d7992af7b31457fb71bbac86df7c: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/106/fs/coredns: no space left on device\"" pod="kube-system/coredns-668d6bf9bc-qnwfr" podUID="713f2d42-12d1-4f50-bdc7-9964c95a9e2a" Aug 13 01:15:34.504393 kubelet[2774]: E0813 01:15:34.503767 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
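Editor's note: the closing entries show two things. First, kubelet's eviction manager tries to reclaim ephemeral-storage, ranks every pod on the node, refuses each one because it is critical, and gives up ("unable to evict any pods from the node"), which is why the image pulls keep failing until space is freed by other means. Second, the recurring "Nameserver limits exceeded" warning records kubelet trimming resolv.conf to the three nameservers it applied (172.232.0.15, 172.232.0.18, 172.232.0.17) and dropping the rest. The sketch below mirrors only the eviction decision pattern visible in these log lines; it is not kubelet's implementation.

    // Decision pattern from the eviction-manager entries above: walk the ranked
    // pod list, skip critical pods, report when nothing is evictable.
    package main

    import "fmt"

    type pod struct {
    	name     string
    	critical bool // e.g. system-node-critical / system-cluster-critical
    }

    func evictOne(ranked []pod) (string, bool) {
    	for _, p := range ranked {
    		if p.critical {
    			fmt.Printf("cannot evict a critical pod: %s\n", p.name)
    			continue
    		}
    		return p.name, true
    	}
    	return "", false // "unable to evict any pods from the node"
    }

    func main() {
    	ranked := []pod{
    		{"calico-system/calico-kube-controllers-c5f875d88-fvdcx", true},
    		{"kube-system/coredns-668d6bf9bc-qnwfr", true},
    		{"calico-system/csi-node-driver-bc7dg", true},
    	}
    	if _, ok := evictOne(ranked); !ok {
    		fmt.Println("unable to evict any pods; storage must be reclaimed another way (image GC, freeing /var)")
    	}
    }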