Mar 17 17:38:31.912922 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Mar 17 16:07:40 -00 2025 Mar 17 17:38:31.912945 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d4b838cd9a6f58e8c4a6b615c32b0b28ee0df1660e34033a8fbd0429c6de5fd0 Mar 17 17:38:31.912956 kernel: BIOS-provided physical RAM map: Mar 17 17:38:31.912962 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Mar 17 17:38:31.912968 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Mar 17 17:38:31.912974 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Mar 17 17:38:31.912981 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Mar 17 17:38:31.912987 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Mar 17 17:38:31.912994 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Mar 17 17:38:31.913002 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Mar 17 17:38:31.913008 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Mar 17 17:38:31.913019 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Mar 17 17:38:31.913025 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Mar 17 17:38:31.913031 kernel: NX (Execute Disable) protection: active Mar 17 17:38:31.913039 kernel: APIC: Static calls initialized Mar 17 17:38:31.913048 kernel: SMBIOS 2.8 present. 
Mar 17 17:38:31.913055 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Mar 17 17:38:31.913061 kernel: Hypervisor detected: KVM Mar 17 17:38:31.913068 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Mar 17 17:38:31.913075 kernel: kvm-clock: using sched offset of 2675489634 cycles Mar 17 17:38:31.913082 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Mar 17 17:38:31.913089 kernel: tsc: Detected 2794.750 MHz processor Mar 17 17:38:31.913096 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Mar 17 17:38:31.913103 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Mar 17 17:38:31.913110 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Mar 17 17:38:31.913119 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Mar 17 17:38:31.913126 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Mar 17 17:38:31.913133 kernel: Using GB pages for direct mapping Mar 17 17:38:31.913140 kernel: ACPI: Early table checksum verification disabled Mar 17 17:38:31.913147 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Mar 17 17:38:31.913154 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 17:38:31.913161 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 17:38:31.913168 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 17:38:31.913176 kernel: ACPI: FACS 0x000000009CFE0000 000040 Mar 17 17:38:31.913183 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 17:38:31.913190 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 17:38:31.913271 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 17:38:31.913278 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 17:38:31.913285 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db] Mar 17 17:38:31.913292 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7] Mar 17 17:38:31.913303 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Mar 17 17:38:31.913312 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b] Mar 17 17:38:31.913322 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3] Mar 17 17:38:31.913329 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df] Mar 17 17:38:31.913336 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407] Mar 17 17:38:31.913343 kernel: No NUMA configuration found Mar 17 17:38:31.913350 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Mar 17 17:38:31.913357 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Mar 17 17:38:31.913367 kernel: Zone ranges: Mar 17 17:38:31.913374 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Mar 17 17:38:31.913381 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Mar 17 17:38:31.913388 kernel: Normal empty Mar 17 17:38:31.913395 kernel: Movable zone start for each node Mar 17 17:38:31.913403 kernel: Early memory node ranges Mar 17 17:38:31.913410 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Mar 17 17:38:31.913417 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Mar 17 17:38:31.913424 kernel: Initmem setup node 0 [mem 
0x0000000000001000-0x000000009cfdbfff] Mar 17 17:38:31.913433 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 17 17:38:31.913443 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Mar 17 17:38:31.913450 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Mar 17 17:38:31.913457 kernel: ACPI: PM-Timer IO Port: 0x608 Mar 17 17:38:31.913464 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Mar 17 17:38:31.913471 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Mar 17 17:38:31.913478 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Mar 17 17:38:31.913485 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Mar 17 17:38:31.913492 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Mar 17 17:38:31.913501 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Mar 17 17:38:31.913509 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Mar 17 17:38:31.913516 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Mar 17 17:38:31.913523 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Mar 17 17:38:31.913530 kernel: TSC deadline timer available Mar 17 17:38:31.913539 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Mar 17 17:38:31.913548 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Mar 17 17:38:31.913557 kernel: kvm-guest: KVM setup pv remote TLB flush Mar 17 17:38:31.913566 kernel: kvm-guest: setup PV sched yield Mar 17 17:38:31.913575 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Mar 17 17:38:31.913586 kernel: Booting paravirtualized kernel on KVM Mar 17 17:38:31.913596 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Mar 17 17:38:31.913605 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Mar 17 17:38:31.913614 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288 Mar 17 17:38:31.913623 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152 Mar 17 17:38:31.913631 kernel: pcpu-alloc: [0] 0 1 2 3 Mar 17 17:38:31.913640 kernel: kvm-guest: PV spinlocks enabled Mar 17 17:38:31.913649 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Mar 17 17:38:31.913659 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d4b838cd9a6f58e8c4a6b615c32b0b28ee0df1660e34033a8fbd0429c6de5fd0 Mar 17 17:38:31.913672 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Mar 17 17:38:31.913680 kernel: random: crng init done Mar 17 17:38:31.913689 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 17 17:38:31.913699 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 17 17:38:31.913708 kernel: Fallback order for Node 0: 0 Mar 17 17:38:31.913717 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 632732 Mar 17 17:38:31.913725 kernel: Policy zone: DMA32 Mar 17 17:38:31.913734 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 17 17:38:31.913744 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2303K rwdata, 22744K rodata, 42992K init, 2196K bss, 136900K reserved, 0K cma-reserved) Mar 17 17:38:31.913752 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Mar 17 17:38:31.913759 kernel: ftrace: allocating 37938 entries in 149 pages Mar 17 17:38:31.913766 kernel: ftrace: allocated 149 pages with 4 groups Mar 17 17:38:31.913773 kernel: Dynamic Preempt: voluntary Mar 17 17:38:31.913780 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 17 17:38:31.913791 kernel: rcu: RCU event tracing is enabled. Mar 17 17:38:31.913799 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Mar 17 17:38:31.913806 kernel: Trampoline variant of Tasks RCU enabled. Mar 17 17:38:31.913824 kernel: Rude variant of Tasks RCU enabled. Mar 17 17:38:31.913831 kernel: Tracing variant of Tasks RCU enabled. Mar 17 17:38:31.913841 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Mar 17 17:38:31.913848 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Mar 17 17:38:31.913855 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Mar 17 17:38:31.913862 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Mar 17 17:38:31.913869 kernel: Console: colour VGA+ 80x25 Mar 17 17:38:31.913876 kernel: printk: console [ttyS0] enabled Mar 17 17:38:31.913883 kernel: ACPI: Core revision 20230628 Mar 17 17:38:31.913893 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Mar 17 17:38:31.913900 kernel: APIC: Switch to symmetric I/O mode setup Mar 17 17:38:31.913907 kernel: x2apic enabled Mar 17 17:38:31.913914 kernel: APIC: Switched APIC routing to: physical x2apic Mar 17 17:38:31.913921 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Mar 17 17:38:31.913929 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Mar 17 17:38:31.913936 kernel: kvm-guest: setup PV IPIs Mar 17 17:38:31.913953 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Mar 17 17:38:31.913960 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Mar 17 17:38:31.913968 kernel: Calibrating delay loop (skipped) preset value.. 
5589.50 BogoMIPS (lpj=2794750) Mar 17 17:38:31.913975 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Mar 17 17:38:31.913983 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Mar 17 17:38:31.913992 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Mar 17 17:38:31.914000 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Mar 17 17:38:31.914007 kernel: Spectre V2 : Mitigation: Retpolines Mar 17 17:38:31.914015 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Mar 17 17:38:31.914023 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Mar 17 17:38:31.914032 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Mar 17 17:38:31.914040 kernel: RETBleed: Mitigation: untrained return thunk Mar 17 17:38:31.914047 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Mar 17 17:38:31.914055 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Mar 17 17:38:31.914062 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Mar 17 17:38:31.914070 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Mar 17 17:38:31.914078 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Mar 17 17:38:31.914086 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Mar 17 17:38:31.914095 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Mar 17 17:38:31.914103 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Mar 17 17:38:31.914110 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Mar 17 17:38:31.914118 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Mar 17 17:38:31.914125 kernel: Freeing SMP alternatives memory: 32K Mar 17 17:38:31.914133 kernel: pid_max: default: 32768 minimum: 301 Mar 17 17:38:31.914140 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 17 17:38:31.914147 kernel: landlock: Up and running. Mar 17 17:38:31.914155 kernel: SELinux: Initializing. Mar 17 17:38:31.914165 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 17 17:38:31.914172 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 17 17:38:31.914180 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Mar 17 17:38:31.914187 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 17 17:38:31.914205 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 17 17:38:31.914213 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 17 17:38:31.914223 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Mar 17 17:38:31.914231 kernel: ... version: 0 Mar 17 17:38:31.914238 kernel: ... bit width: 48 Mar 17 17:38:31.914248 kernel: ... generic registers: 6 Mar 17 17:38:31.914255 kernel: ... value mask: 0000ffffffffffff Mar 17 17:38:31.914263 kernel: ... max period: 00007fffffffffff Mar 17 17:38:31.914270 kernel: ... fixed-purpose events: 0 Mar 17 17:38:31.914278 kernel: ... 
event mask: 000000000000003f Mar 17 17:38:31.914285 kernel: signal: max sigframe size: 1776 Mar 17 17:38:31.914292 kernel: rcu: Hierarchical SRCU implementation. Mar 17 17:38:31.914300 kernel: rcu: Max phase no-delay instances is 400. Mar 17 17:38:31.914308 kernel: smp: Bringing up secondary CPUs ... Mar 17 17:38:31.914317 kernel: smpboot: x86: Booting SMP configuration: Mar 17 17:38:31.914325 kernel: .... node #0, CPUs: #1 #2 #3 Mar 17 17:38:31.914332 kernel: smp: Brought up 1 node, 4 CPUs Mar 17 17:38:31.914340 kernel: smpboot: Max logical packages: 1 Mar 17 17:38:31.914347 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Mar 17 17:38:31.914354 kernel: devtmpfs: initialized Mar 17 17:38:31.914362 kernel: x86/mm: Memory block size: 128MB Mar 17 17:38:31.914369 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 17 17:38:31.914377 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Mar 17 17:38:31.914387 kernel: pinctrl core: initialized pinctrl subsystem Mar 17 17:38:31.914394 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 17 17:38:31.914402 kernel: audit: initializing netlink subsys (disabled) Mar 17 17:38:31.914409 kernel: audit: type=2000 audit(1742233111.483:1): state=initialized audit_enabled=0 res=1 Mar 17 17:38:31.914417 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 17 17:38:31.914424 kernel: thermal_sys: Registered thermal governor 'user_space' Mar 17 17:38:31.914432 kernel: cpuidle: using governor menu Mar 17 17:38:31.914439 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 17 17:38:31.914446 kernel: dca service started, version 1.12.1 Mar 17 17:38:31.914457 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Mar 17 17:38:31.914465 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Mar 17 17:38:31.914472 kernel: PCI: Using configuration type 1 for base access Mar 17 17:38:31.914480 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Mar 17 17:38:31.914487 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 17 17:38:31.914495 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Mar 17 17:38:31.914502 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 17 17:38:31.914510 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Mar 17 17:38:31.914517 kernel: ACPI: Added _OSI(Module Device) Mar 17 17:38:31.914527 kernel: ACPI: Added _OSI(Processor Device) Mar 17 17:38:31.914535 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Mar 17 17:38:31.914542 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 17 17:38:31.914550 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 17 17:38:31.914557 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Mar 17 17:38:31.914564 kernel: ACPI: Interpreter enabled Mar 17 17:38:31.914572 kernel: ACPI: PM: (supports S0 S3 S5) Mar 17 17:38:31.914579 kernel: ACPI: Using IOAPIC for interrupt routing Mar 17 17:38:31.914587 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Mar 17 17:38:31.914597 kernel: PCI: Using E820 reservations for host bridge windows Mar 17 17:38:31.914604 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Mar 17 17:38:31.914612 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Mar 17 17:38:31.914822 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Mar 17 17:38:31.914954 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Mar 17 17:38:31.915075 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Mar 17 17:38:31.915085 kernel: PCI host bridge to bus 0000:00 Mar 17 17:38:31.915243 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Mar 17 17:38:31.915359 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Mar 17 17:38:31.915477 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Mar 17 17:38:31.915609 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Mar 17 17:38:31.915718 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Mar 17 17:38:31.915835 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Mar 17 17:38:31.915948 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Mar 17 17:38:31.916100 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Mar 17 17:38:31.916251 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Mar 17 17:38:31.916374 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Mar 17 17:38:31.916494 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Mar 17 17:38:31.916638 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Mar 17 17:38:31.916779 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Mar 17 17:38:31.916931 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Mar 17 17:38:31.917062 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Mar 17 17:38:31.917183 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Mar 17 17:38:31.917335 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Mar 17 17:38:31.917474 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Mar 17 17:38:31.917618 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Mar 17 17:38:31.917739 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Mar 17 
17:38:31.917875 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Mar 17 17:38:31.918014 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Mar 17 17:38:31.918137 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Mar 17 17:38:31.918276 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Mar 17 17:38:31.918404 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Mar 17 17:38:31.918581 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Mar 17 17:38:31.918863 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Mar 17 17:38:31.919002 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Mar 17 17:38:31.919151 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Mar 17 17:38:31.919297 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Mar 17 17:38:31.919427 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Mar 17 17:38:31.919574 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Mar 17 17:38:31.919698 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Mar 17 17:38:31.919708 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Mar 17 17:38:31.919720 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Mar 17 17:38:31.919729 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Mar 17 17:38:31.919736 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Mar 17 17:38:31.919744 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Mar 17 17:38:31.919752 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Mar 17 17:38:31.919759 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Mar 17 17:38:31.919767 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Mar 17 17:38:31.919775 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Mar 17 17:38:31.919782 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Mar 17 17:38:31.919792 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Mar 17 17:38:31.919800 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Mar 17 17:38:31.919807 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Mar 17 17:38:31.919823 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Mar 17 17:38:31.919831 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Mar 17 17:38:31.919838 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Mar 17 17:38:31.919846 kernel: iommu: Default domain type: Translated Mar 17 17:38:31.919854 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Mar 17 17:38:31.919861 kernel: PCI: Using ACPI for IRQ routing Mar 17 17:38:31.919872 kernel: PCI: pci_cache_line_size set to 64 bytes Mar 17 17:38:31.919880 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Mar 17 17:38:31.919887 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Mar 17 17:38:31.920009 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Mar 17 17:38:31.920130 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Mar 17 17:38:31.920264 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Mar 17 17:38:31.920275 kernel: vgaarb: loaded Mar 17 17:38:31.920286 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Mar 17 17:38:31.920306 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Mar 17 17:38:31.920319 kernel: clocksource: Switched to clocksource kvm-clock Mar 17 17:38:31.920331 kernel: VFS: Disk quotas dquot_6.6.0 Mar 17 
17:38:31.920345 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 17 17:38:31.920358 kernel: pnp: PnP ACPI init Mar 17 17:38:31.920551 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Mar 17 17:38:31.920567 kernel: pnp: PnP ACPI: found 6 devices Mar 17 17:38:31.920576 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Mar 17 17:38:31.920590 kernel: NET: Registered PF_INET protocol family Mar 17 17:38:31.920600 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 17 17:38:31.920609 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Mar 17 17:38:31.920619 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 17 17:38:31.920629 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 17 17:38:31.920639 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Mar 17 17:38:31.920648 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 17 17:38:31.920658 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 17 17:38:31.920667 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 17 17:38:31.920680 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 17 17:38:31.920690 kernel: NET: Registered PF_XDP protocol family Mar 17 17:38:31.920869 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Mar 17 17:38:31.920984 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Mar 17 17:38:31.921097 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Mar 17 17:38:31.921234 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Mar 17 17:38:31.921348 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Mar 17 17:38:31.921459 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Mar 17 17:38:31.921474 kernel: PCI: CLS 0 bytes, default 64 Mar 17 17:38:31.921482 kernel: Initialise system trusted keyrings Mar 17 17:38:31.921490 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 17 17:38:31.921498 kernel: Key type asymmetric registered Mar 17 17:38:31.921505 kernel: Asymmetric key parser 'x509' registered Mar 17 17:38:31.921514 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Mar 17 17:38:31.921523 kernel: io scheduler mq-deadline registered Mar 17 17:38:31.921531 kernel: io scheduler kyber registered Mar 17 17:38:31.921539 kernel: io scheduler bfq registered Mar 17 17:38:31.921549 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 17 17:38:31.921558 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Mar 17 17:38:31.921566 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Mar 17 17:38:31.921574 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Mar 17 17:38:31.921581 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 17 17:38:31.921589 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 17 17:38:31.921597 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Mar 17 17:38:31.921605 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 17 17:38:31.921612 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 17 17:38:31.921746 kernel: rtc_cmos 00:04: RTC can wake from S4 Mar 17 17:38:31.921761 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Mar 17 17:38:31.921887 kernel: 
rtc_cmos 00:04: registered as rtc0 Mar 17 17:38:31.922002 kernel: rtc_cmos 00:04: setting system clock to 2025-03-17T17:38:31 UTC (1742233111) Mar 17 17:38:31.922114 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Mar 17 17:38:31.922124 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Mar 17 17:38:31.922132 kernel: NET: Registered PF_INET6 protocol family Mar 17 17:38:31.922139 kernel: Segment Routing with IPv6 Mar 17 17:38:31.922150 kernel: In-situ OAM (IOAM) with IPv6 Mar 17 17:38:31.922158 kernel: NET: Registered PF_PACKET protocol family Mar 17 17:38:31.922166 kernel: Key type dns_resolver registered Mar 17 17:38:31.922173 kernel: IPI shorthand broadcast: enabled Mar 17 17:38:31.922181 kernel: sched_clock: Marking stable (741002670, 104731472)->(860669926, -14935784) Mar 17 17:38:31.922189 kernel: registered taskstats version 1 Mar 17 17:38:31.922283 kernel: Loading compiled-in X.509 certificates Mar 17 17:38:31.922291 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: 608fb88224bc0ea76afefc598557abb0413f36c0' Mar 17 17:38:31.922299 kernel: Key type .fscrypt registered Mar 17 17:38:31.922324 kernel: Key type fscrypt-provisioning registered Mar 17 17:38:31.922339 kernel: ima: No TPM chip found, activating TPM-bypass! Mar 17 17:38:31.922359 kernel: ima: Allocated hash algorithm: sha1 Mar 17 17:38:31.922367 kernel: ima: No architecture policies found Mar 17 17:38:31.922375 kernel: clk: Disabling unused clocks Mar 17 17:38:31.922382 kernel: Freeing unused kernel image (initmem) memory: 42992K Mar 17 17:38:31.922390 kernel: Write protecting the kernel read-only data: 36864k Mar 17 17:38:31.922398 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K Mar 17 17:38:31.922405 kernel: Run /init as init process Mar 17 17:38:31.922415 kernel: with arguments: Mar 17 17:38:31.922423 kernel: /init Mar 17 17:38:31.922430 kernel: with environment: Mar 17 17:38:31.922438 kernel: HOME=/ Mar 17 17:38:31.922446 kernel: TERM=linux Mar 17 17:38:31.922457 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Mar 17 17:38:31.922467 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 17 17:38:31.922477 systemd[1]: Detected virtualization kvm. Mar 17 17:38:31.922488 systemd[1]: Detected architecture x86-64. Mar 17 17:38:31.922496 systemd[1]: Running in initrd. Mar 17 17:38:31.922504 systemd[1]: No hostname configured, using default hostname. Mar 17 17:38:31.922512 systemd[1]: Hostname set to . Mar 17 17:38:31.922520 systemd[1]: Initializing machine ID from VM UUID. Mar 17 17:38:31.922528 systemd[1]: Queued start job for default target initrd.target. Mar 17 17:38:31.922536 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 17:38:31.922545 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 17 17:38:31.922556 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 17 17:38:31.922576 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Mar 17 17:38:31.922586 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 17 17:38:31.922595 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 17 17:38:31.922605 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 17 17:38:31.922616 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 17 17:38:31.922624 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 17:38:31.922633 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 17 17:38:31.922641 systemd[1]: Reached target paths.target - Path Units. Mar 17 17:38:31.922649 systemd[1]: Reached target slices.target - Slice Units. Mar 17 17:38:31.922658 systemd[1]: Reached target swap.target - Swaps. Mar 17 17:38:31.922666 systemd[1]: Reached target timers.target - Timer Units. Mar 17 17:38:31.922674 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 17 17:38:31.922685 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 17 17:38:31.922693 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 17 17:38:31.922702 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Mar 17 17:38:31.922710 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 17 17:38:31.922718 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 17 17:38:31.922727 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 17 17:38:31.922735 systemd[1]: Reached target sockets.target - Socket Units. Mar 17 17:38:31.922743 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 17 17:38:31.922751 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 17 17:38:31.922762 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 17 17:38:31.922771 systemd[1]: Starting systemd-fsck-usr.service... Mar 17 17:38:31.922779 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 17 17:38:31.922787 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 17 17:38:31.922795 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:38:31.922804 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 17 17:38:31.922812 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 17:38:31.922834 systemd[1]: Finished systemd-fsck-usr.service. Mar 17 17:38:31.922867 systemd-journald[193]: Collecting audit messages is disabled. Mar 17 17:38:31.922890 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 17 17:38:31.922901 systemd-journald[193]: Journal started Mar 17 17:38:31.922922 systemd-journald[193]: Runtime Journal (/run/log/journal/16d85a65cb0a43a98b4eac998238e2c6) is 6.0M, max 48.4M, 42.3M free. Mar 17 17:38:31.920940 systemd-modules-load[194]: Inserted module 'overlay' Mar 17 17:38:31.958721 systemd[1]: Started systemd-journald.service - Journal Service. Mar 17 17:38:31.958749 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Mar 17 17:38:31.958762 kernel: Bridge firewalling registered Mar 17 17:38:31.949593 systemd-modules-load[194]: Inserted module 'br_netfilter' Mar 17 17:38:31.959093 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 17 17:38:31.959768 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 17 17:38:31.978485 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:38:31.980873 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 17 17:38:31.981657 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 17 17:38:31.983627 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:38:31.990209 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 17 17:38:31.991695 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:38:31.996644 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 17 17:38:31.998206 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 17 17:38:32.008909 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 17 17:38:32.019962 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:38:32.032567 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 17 17:38:32.032639 systemd-resolved[216]: Positive Trust Anchors: Mar 17 17:38:32.032650 systemd-resolved[216]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 17:38:32.032681 systemd-resolved[216]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 17 17:38:32.035372 systemd-resolved[216]: Defaulting to hostname 'linux'. Mar 17 17:38:32.036488 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 17 17:38:32.037251 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 17 17:38:32.057141 dracut-cmdline[229]: dracut-dracut-053 Mar 17 17:38:32.061115 dracut-cmdline[229]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d4b838cd9a6f58e8c4a6b615c32b0b28ee0df1660e34033a8fbd0429c6de5fd0 Mar 17 17:38:32.147244 kernel: SCSI subsystem initialized Mar 17 17:38:32.156226 kernel: Loading iSCSI transport class v2.0-870. Mar 17 17:38:32.167231 kernel: iscsi: registered transport (tcp) Mar 17 17:38:32.192246 kernel: iscsi: registered transport (qla4xxx) Mar 17 17:38:32.192332 kernel: QLogic iSCSI HBA Driver Mar 17 17:38:32.248440 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Mar 17 17:38:32.268474 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 17 17:38:32.295462 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 17 17:38:32.295537 kernel: device-mapper: uevent: version 1.0.3 Mar 17 17:38:32.296484 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 17 17:38:32.339252 kernel: raid6: avx2x4 gen() 30487 MB/s Mar 17 17:38:32.356238 kernel: raid6: avx2x2 gen() 30969 MB/s Mar 17 17:38:32.373308 kernel: raid6: avx2x1 gen() 26012 MB/s Mar 17 17:38:32.373354 kernel: raid6: using algorithm avx2x2 gen() 30969 MB/s Mar 17 17:38:32.391328 kernel: raid6: .... xor() 19932 MB/s, rmw enabled Mar 17 17:38:32.391410 kernel: raid6: using avx2x2 recovery algorithm Mar 17 17:38:32.412247 kernel: xor: automatically using best checksumming function avx Mar 17 17:38:32.563255 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 17 17:38:32.579283 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 17 17:38:32.591482 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 17 17:38:32.603486 systemd-udevd[413]: Using default interface naming scheme 'v255'. Mar 17 17:38:32.608330 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 17 17:38:32.619367 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 17 17:38:32.637338 dracut-pre-trigger[424]: rd.md=0: removing MD RAID activation Mar 17 17:38:32.673774 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 17 17:38:32.682469 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 17 17:38:32.743956 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 17 17:38:32.757607 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 17 17:38:32.770396 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 17 17:38:32.773428 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 17 17:38:32.774728 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 17 17:38:32.775951 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 17 17:38:32.781674 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Mar 17 17:38:32.822119 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Mar 17 17:38:32.822320 kernel: cryptd: max_cpu_qlen set to 1000 Mar 17 17:38:32.822332 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 17 17:38:32.822343 kernel: GPT:9289727 != 19775487 Mar 17 17:38:32.822359 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 17 17:38:32.822369 kernel: GPT:9289727 != 19775487 Mar 17 17:38:32.822379 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 17 17:38:32.822389 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 17:38:32.822399 kernel: libata version 3.00 loaded. Mar 17 17:38:32.822409 kernel: ahci 0000:00:1f.2: version 3.0 Mar 17 17:38:32.834103 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 17 17:38:32.834118 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Mar 17 17:38:32.839477 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 17 17:38:32.840719 kernel: AVX2 version of gcm_enc/dec engaged. 
Mar 17 17:38:32.840735 kernel: AES CTR mode by8 optimization enabled Mar 17 17:38:32.840746 kernel: scsi host0: ahci Mar 17 17:38:32.841660 kernel: scsi host1: ahci Mar 17 17:38:32.841828 kernel: scsi host2: ahci Mar 17 17:38:32.841989 kernel: scsi host3: ahci Mar 17 17:38:32.842139 kernel: scsi host4: ahci Mar 17 17:38:32.842315 kernel: scsi host5: ahci Mar 17 17:38:32.842480 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Mar 17 17:38:32.842496 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Mar 17 17:38:32.842506 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Mar 17 17:38:32.842517 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Mar 17 17:38:32.842527 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Mar 17 17:38:32.842537 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Mar 17 17:38:32.789498 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 17 17:38:32.800405 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 17 17:38:32.808303 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 17 17:38:32.808441 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:38:32.810242 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 17 17:38:32.811391 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 17:38:32.811510 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:38:32.812668 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:38:32.846425 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (458) Mar 17 17:38:32.826514 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:38:32.850318 kernel: BTRFS: device fsid 2b8ebefd-e897-48f6-96d5-0893fbb7c64a devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (474) Mar 17 17:38:32.852074 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Mar 17 17:38:32.858938 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 17 17:38:32.871616 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 17 17:38:32.896641 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 17 17:38:32.899174 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 17 17:38:32.913313 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 17 17:38:32.913574 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:38:32.917937 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 17 17:38:32.937505 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Mar 17 17:38:33.145854 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 17 17:38:33.145944 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 17 17:38:33.145958 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 17 17:38:33.147223 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 17 17:38:33.148219 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Mar 17 17:38:33.149220 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 17 17:38:33.149233 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Mar 17 17:38:33.150240 kernel: ata3.00: applying bridge limits Mar 17 17:38:33.151220 kernel: ata3.00: configured for UDMA/100 Mar 17 17:38:33.151241 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Mar 17 17:38:33.198462 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Mar 17 17:38:33.210904 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 17 17:38:33.210929 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Mar 17 17:38:33.333530 disk-uuid[555]: Primary Header is updated. Mar 17 17:38:33.333530 disk-uuid[555]: Secondary Entries is updated. Mar 17 17:38:33.333530 disk-uuid[555]: Secondary Header is updated. Mar 17 17:38:33.338232 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 17:38:33.343226 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 17:38:34.344224 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 17:38:34.344468 disk-uuid[579]: The operation has completed successfully. Mar 17 17:38:34.403461 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 17 17:38:34.403585 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 17 17:38:34.411410 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 17 17:38:34.414452 sh[592]: Success Mar 17 17:38:34.426230 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Mar 17 17:38:34.459608 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 17 17:38:34.472817 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 17 17:38:34.475635 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 17 17:38:34.487358 kernel: BTRFS info (device dm-0): first mount of filesystem 2b8ebefd-e897-48f6-96d5-0893fbb7c64a Mar 17 17:38:34.487386 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 17 17:38:34.487397 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 17 17:38:34.488374 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 17 17:38:34.489717 kernel: BTRFS info (device dm-0): using free space tree Mar 17 17:38:34.494091 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 17 17:38:34.496565 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 17 17:38:34.506330 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 17 17:38:34.508894 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Mar 17 17:38:34.517232 kernel: BTRFS info (device vda6): first mount of filesystem 7b241d32-136b-4fe3-b105-cecff2b2cf64 Mar 17 17:38:34.517265 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 17 17:38:34.517276 kernel: BTRFS info (device vda6): using free space tree Mar 17 17:38:34.521242 kernel: BTRFS info (device vda6): auto enabling async discard Mar 17 17:38:34.530114 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 17 17:38:34.532099 kernel: BTRFS info (device vda6): last unmount of filesystem 7b241d32-136b-4fe3-b105-cecff2b2cf64 Mar 17 17:38:34.542383 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 17 17:38:34.553454 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 17 17:38:34.632476 ignition[690]: Ignition 2.20.0 Mar 17 17:38:34.632492 ignition[690]: Stage: fetch-offline Mar 17 17:38:34.632541 ignition[690]: no configs at "/usr/lib/ignition/base.d" Mar 17 17:38:34.632554 ignition[690]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 17:38:34.632676 ignition[690]: parsed url from cmdline: "" Mar 17 17:38:34.632681 ignition[690]: no config URL provided Mar 17 17:38:34.632686 ignition[690]: reading system config file "/usr/lib/ignition/user.ign" Mar 17 17:38:34.632696 ignition[690]: no config at "/usr/lib/ignition/user.ign" Mar 17 17:38:34.632727 ignition[690]: op(1): [started] loading QEMU firmware config module Mar 17 17:38:34.632732 ignition[690]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 17 17:38:34.639249 ignition[690]: op(1): [finished] loading QEMU firmware config module Mar 17 17:38:34.651049 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 17 17:38:34.667381 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 17 17:38:34.683110 ignition[690]: parsing config with SHA512: dd10c621aad1ffd073e726b5757560f0c9b1a84a0ffd44129db6c5da570d7eb4b725b7c5efe1676b07a2ef00a6a0d710ecb771e92d4ade6d45f445280736c1f8 Mar 17 17:38:34.688164 systemd-networkd[782]: lo: Link UP Mar 17 17:38:34.688172 systemd-networkd[782]: lo: Gained carrier Mar 17 17:38:34.690010 systemd-networkd[782]: Enumeration completed Mar 17 17:38:34.690406 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:38:34.690410 systemd-networkd[782]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 17:38:34.692623 ignition[690]: fetch-offline: fetch-offline passed Mar 17 17:38:34.691462 systemd-networkd[782]: eth0: Link UP Mar 17 17:38:34.692698 ignition[690]: Ignition finished successfully Mar 17 17:38:34.691466 systemd-networkd[782]: eth0: Gained carrier Mar 17 17:38:34.691473 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:38:34.692209 unknown[690]: fetched base config from "system" Mar 17 17:38:34.692217 unknown[690]: fetched user config from "qemu" Mar 17 17:38:34.692722 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 17 17:38:34.695960 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 17 17:38:34.699113 systemd[1]: Reached target network.target - Network. Mar 17 17:38:34.700571 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). 
Mar 17 17:38:34.701237 systemd-networkd[782]: eth0: DHCPv4 address 10.0.0.27/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 17 17:38:34.712351 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 17 17:38:34.732455 ignition[785]: Ignition 2.20.0 Mar 17 17:38:34.732467 ignition[785]: Stage: kargs Mar 17 17:38:34.732630 ignition[785]: no configs at "/usr/lib/ignition/base.d" Mar 17 17:38:34.732641 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 17:38:34.736393 ignition[785]: kargs: kargs passed Mar 17 17:38:34.737018 ignition[785]: Ignition finished successfully Mar 17 17:38:34.740853 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 17 17:38:34.758403 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 17 17:38:34.781948 ignition[794]: Ignition 2.20.0 Mar 17 17:38:34.781960 ignition[794]: Stage: disks Mar 17 17:38:34.782127 ignition[794]: no configs at "/usr/lib/ignition/base.d" Mar 17 17:38:34.782138 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 17:38:34.785944 ignition[794]: disks: disks passed Mar 17 17:38:34.786586 ignition[794]: Ignition finished successfully Mar 17 17:38:34.789530 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 17 17:38:34.790989 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 17 17:38:34.793295 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 17 17:38:34.794831 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 17 17:38:34.797352 systemd[1]: Reached target sysinit.target - System Initialization. Mar 17 17:38:34.798645 systemd[1]: Reached target basic.target - Basic System. Mar 17 17:38:34.816361 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 17 17:38:34.827943 systemd-fsck[804]: ROOT: clean, 14/553520 files, 52654/553472 blocks Mar 17 17:38:34.836564 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 17 17:38:34.845299 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 17 17:38:34.930231 kernel: EXT4-fs (vda9): mounted filesystem 345fc709-8965-4219-b368-16e508c3d632 r/w with ordered data mode. Quota mode: none. Mar 17 17:38:34.931017 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 17 17:38:34.932604 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 17 17:38:34.943358 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 17 17:38:34.945440 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 17 17:38:34.947941 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 17 17:38:34.952384 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (812) Mar 17 17:38:34.952409 kernel: BTRFS info (device vda6): first mount of filesystem 7b241d32-136b-4fe3-b105-cecff2b2cf64 Mar 17 17:38:34.947989 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). 
Mar 17 17:38:34.958796 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 17 17:38:34.958818 kernel: BTRFS info (device vda6): using free space tree Mar 17 17:38:34.958829 kernel: BTRFS info (device vda6): auto enabling async discard Mar 17 17:38:34.948012 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 17 17:38:34.953111 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 17 17:38:34.959846 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 17 17:38:34.962790 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 17 17:38:35.003163 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory Mar 17 17:38:35.007082 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory Mar 17 17:38:35.011661 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory Mar 17 17:38:35.015015 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory Mar 17 17:38:35.090765 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 17 17:38:35.100304 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 17 17:38:35.101996 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 17 17:38:35.108217 kernel: BTRFS info (device vda6): last unmount of filesystem 7b241d32-136b-4fe3-b105-cecff2b2cf64 Mar 17 17:38:35.124150 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 17 17:38:35.266325 ignition[929]: INFO : Ignition 2.20.0 Mar 17 17:38:35.266325 ignition[929]: INFO : Stage: mount Mar 17 17:38:35.268297 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 17:38:35.268297 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 17:38:35.268297 ignition[929]: INFO : mount: mount passed Mar 17 17:38:35.268297 ignition[929]: INFO : Ignition finished successfully Mar 17 17:38:35.273649 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 17 17:38:35.287335 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 17 17:38:35.487343 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 17 17:38:35.500385 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 17 17:38:35.507224 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (938) Mar 17 17:38:35.507261 kernel: BTRFS info (device vda6): first mount of filesystem 7b241d32-136b-4fe3-b105-cecff2b2cf64 Mar 17 17:38:35.508585 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 17 17:38:35.508598 kernel: BTRFS info (device vda6): using free space tree Mar 17 17:38:35.512219 kernel: BTRFS info (device vda6): auto enabling async discard Mar 17 17:38:35.513275 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 17 17:38:35.553403 ignition[955]: INFO : Ignition 2.20.0 Mar 17 17:38:35.553403 ignition[955]: INFO : Stage: files Mar 17 17:38:35.555662 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 17:38:35.555662 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 17:38:35.555662 ignition[955]: DEBUG : files: compiled without relabeling support, skipping Mar 17 17:38:35.559653 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 17 17:38:35.559653 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 17 17:38:35.559653 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 17 17:38:35.559653 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 17 17:38:35.565944 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 17 17:38:35.565944 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Mar 17 17:38:35.565944 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Mar 17 17:38:35.565944 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Mar 17 17:38:35.565944 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Mar 17 17:38:35.559864 unknown[955]: wrote ssh authorized keys file for user: core Mar 17 17:38:35.631935 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Mar 17 17:38:35.836811 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Mar 17 17:38:35.836811 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Mar 17 17:38:35.841276 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Mar 17 17:38:35.841276 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 17 17:38:35.841276 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 17 17:38:35.841276 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 17 17:38:35.841276 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 17 17:38:35.841276 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 17 17:38:35.841276 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 17 17:38:35.841276 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 17:38:35.841276 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 17:38:35.841276 ignition[955]: INFO 
: files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Mar 17 17:38:35.841276 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Mar 17 17:38:35.841276 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Mar 17 17:38:35.841276 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Mar 17 17:38:36.323102 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Mar 17 17:38:36.415407 systemd-networkd[782]: eth0: Gained IPv6LL Mar 17 17:38:36.634128 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Mar 17 17:38:36.634128 ignition[955]: INFO : files: op(c): [started] processing unit "containerd.service" Mar 17 17:38:36.638504 ignition[955]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Mar 17 17:38:36.641484 ignition[955]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Mar 17 17:38:36.641484 ignition[955]: INFO : files: op(c): [finished] processing unit "containerd.service" Mar 17 17:38:36.641484 ignition[955]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Mar 17 17:38:36.646993 ignition[955]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 17 17:38:36.646993 ignition[955]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 17 17:38:36.646993 ignition[955]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Mar 17 17:38:36.646993 ignition[955]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Mar 17 17:38:36.654111 ignition[955]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 17 17:38:36.654111 ignition[955]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 17 17:38:36.654111 ignition[955]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Mar 17 17:38:36.654111 ignition[955]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service" Mar 17 17:38:36.679470 ignition[955]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service" Mar 17 17:38:36.684432 ignition[955]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service" Mar 17 17:38:36.686250 ignition[955]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service" Mar 17 17:38:36.686250 ignition[955]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service" Mar 
17 17:38:36.686250 ignition[955]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service" Mar 17 17:38:36.690586 ignition[955]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 17 17:38:36.692364 ignition[955]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 17 17:38:36.694040 ignition[955]: INFO : files: files passed Mar 17 17:38:36.694799 ignition[955]: INFO : Ignition finished successfully Mar 17 17:38:36.698175 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 17 17:38:36.710345 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 17 17:38:36.712161 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 17 17:38:36.714206 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 17 17:38:36.714313 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 17 17:38:36.723183 initrd-setup-root-after-ignition[984]: grep: /sysroot/oem/oem-release: No such file or directory Mar 17 17:38:36.726069 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 17 17:38:36.726069 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 17 17:38:36.729443 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 17 17:38:36.733408 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 17 17:38:36.736114 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 17 17:38:36.747465 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 17 17:38:36.779637 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 17 17:38:36.780851 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 17 17:38:36.783499 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 17 17:38:36.785582 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 17 17:38:36.787746 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 17 17:38:36.798465 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 17 17:38:36.814637 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 17 17:38:36.823554 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 17 17:38:36.835187 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 17 17:38:36.838318 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 17 17:38:36.841582 systemd[1]: Stopped target timers.target - Timer Units. Mar 17 17:38:36.844076 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 17 17:38:36.845436 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 17 17:38:36.848900 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 17 17:38:36.850988 systemd[1]: Stopped target basic.target - Basic System. Mar 17 17:38:36.852841 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. 
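Note: the "files" stage operations logged above (creating the "core" user, fetching the helm tarball, writing prepare-helm.service, the kubernetes sysext image) are all driven by the Ignition config supplied to this VM, which the log itself does not reproduce. As a loose illustration only, the following Python sketch builds an Ignition v3-style document that would trigger similar writes; the SSH key and the unit body are placeholders, and only the helm-related names actually seen in the log are echoed here.

import json

# Hypothetical sketch -- not the config used on this machine.
ignition_config = {
    "ignition": {"version": "3.3.0"},
    "passwd": {
        # Matches the "ensureUsers ... core" and ssh-key entries in the log.
        "users": [{"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA...placeholder"]}],
    },
    "storage": {
        "files": [{
            # Matches the op(4) GET/write of the helm tarball seen above.
            "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
            "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"},
        }],
    },
    "systemd": {
        # Matches the op(14) "setting preset to enabled" entry; body is a placeholder.
        "units": [{"name": "prepare-helm.service", "enabled": True, "contents": "[Unit]\n..."}],
    },
}
print(json.dumps(ignition_config, indent=2))

In practice such JSON would usually be produced by transpiling a Butane YAML file rather than written by hand; the sketch is only meant to show which parts of a config the logged op(...) entries correspond to.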
Mar 17 17:38:36.855410 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 17 17:38:36.858311 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 17 17:38:36.861013 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 17 17:38:36.863487 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 17 17:38:36.866256 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 17 17:38:36.868438 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 17 17:38:36.870638 systemd[1]: Stopped target swap.target - Swaps. Mar 17 17:38:36.872294 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 17 17:38:36.873456 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 17 17:38:36.875920 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 17 17:38:36.878447 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 17:38:36.881313 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 17 17:38:36.882444 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 17:38:36.885214 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 17 17:38:36.886242 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 17 17:38:36.888811 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 17 17:38:36.889984 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 17 17:38:36.892738 systemd[1]: Stopped target paths.target - Path Units. Mar 17 17:38:36.894879 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 17 17:38:36.898297 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 17 17:38:36.901031 systemd[1]: Stopped target slices.target - Slice Units. Mar 17 17:38:36.903028 systemd[1]: Stopped target sockets.target - Socket Units. Mar 17 17:38:36.905143 systemd[1]: iscsid.socket: Deactivated successfully. Mar 17 17:38:36.906036 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 17 17:38:36.908293 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 17 17:38:36.909415 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 17 17:38:36.911751 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 17 17:38:36.912999 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 17 17:38:36.915589 systemd[1]: ignition-files.service: Deactivated successfully. Mar 17 17:38:36.916616 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 17 17:38:36.933484 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 17 17:38:36.935496 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 17 17:38:36.936621 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 17:38:36.940084 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 17 17:38:36.941896 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 17 17:38:36.942019 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 17 17:38:36.945619 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 17 17:38:36.946984 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Mar 17 17:38:36.949380 ignition[1010]: INFO : Ignition 2.20.0 Mar 17 17:38:36.949380 ignition[1010]: INFO : Stage: umount Mar 17 17:38:36.949380 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 17:38:36.949380 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 17:38:36.955338 ignition[1010]: INFO : umount: umount passed Mar 17 17:38:36.955338 ignition[1010]: INFO : Ignition finished successfully Mar 17 17:38:36.953388 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 17 17:38:36.955239 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 17 17:38:36.962881 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 17 17:38:36.963072 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 17 17:38:36.967909 systemd[1]: Stopped target network.target - Network. Mar 17 17:38:36.969876 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 17 17:38:36.969969 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 17 17:38:36.973090 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 17 17:38:36.974101 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 17 17:38:36.976269 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 17 17:38:36.976343 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 17 17:38:36.979402 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 17 17:38:36.979493 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 17 17:38:36.983093 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 17 17:38:36.984518 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 17 17:38:36.989244 systemd-networkd[782]: eth0: DHCPv6 lease lost Mar 17 17:38:36.989258 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 17 17:38:36.993548 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 17 17:38:36.993747 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 17 17:38:36.995406 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 17 17:38:36.995457 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 17 17:38:37.004516 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 17 17:38:37.006570 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 17 17:38:37.006652 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 17 17:38:37.010688 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 17 17:38:37.013793 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 17 17:38:37.014976 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 17 17:38:37.020319 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 17:38:37.021385 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:38:37.022861 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 17 17:38:37.022921 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 17 17:38:37.024324 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 17 17:38:37.024382 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Mar 17 17:38:37.030583 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 17 17:38:37.032853 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 17 17:38:37.035311 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 17 17:38:37.036456 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 17 17:38:37.040535 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 17 17:38:37.040630 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 17 17:38:37.044395 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 17 17:38:37.044447 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 17 17:38:37.047709 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 17 17:38:37.047775 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 17 17:38:37.050871 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 17 17:38:37.050925 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 17 17:38:37.054025 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 17 17:38:37.054998 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:38:37.067437 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 17 17:38:37.069778 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 17 17:38:37.069852 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 17 17:38:37.073252 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 17:38:37.074255 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:38:37.077858 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 17 17:38:37.079007 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 17 17:38:37.158412 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 17 17:38:37.159452 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 17 17:38:37.161948 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 17 17:38:37.164171 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 17 17:38:37.164248 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 17 17:38:37.177448 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 17 17:38:37.186235 systemd[1]: Switching root. Mar 17 17:38:37.219546 systemd-journald[193]: Journal stopped Mar 17 17:38:38.449314 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). 
Mar 17 17:38:38.449390 kernel: SELinux: policy capability network_peer_controls=1 Mar 17 17:38:38.449411 kernel: SELinux: policy capability open_perms=1 Mar 17 17:38:38.449424 kernel: SELinux: policy capability extended_socket_class=1 Mar 17 17:38:38.449438 kernel: SELinux: policy capability always_check_network=0 Mar 17 17:38:38.449450 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 17 17:38:38.449461 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 17 17:38:38.449472 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 17 17:38:38.449483 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 17 17:38:38.449497 kernel: audit: type=1403 audit(1742233117.711:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 17 17:38:38.449514 systemd[1]: Successfully loaded SELinux policy in 45.598ms. Mar 17 17:38:38.449533 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 18.770ms. Mar 17 17:38:38.449546 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 17 17:38:38.449558 systemd[1]: Detected virtualization kvm. Mar 17 17:38:38.449570 systemd[1]: Detected architecture x86-64. Mar 17 17:38:38.449582 systemd[1]: Detected first boot. Mar 17 17:38:38.449594 systemd[1]: Initializing machine ID from VM UUID. Mar 17 17:38:38.449606 zram_generator::config[1075]: No configuration found. Mar 17 17:38:38.449622 systemd[1]: Populated /etc with preset unit settings. Mar 17 17:38:38.449642 systemd[1]: Queued start job for default target multi-user.target. Mar 17 17:38:38.449655 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Mar 17 17:38:38.449667 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 17 17:38:38.449679 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 17 17:38:38.449691 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 17 17:38:38.449709 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 17 17:38:38.449721 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 17 17:38:38.449736 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 17 17:38:38.449748 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 17 17:38:38.449761 systemd[1]: Created slice user.slice - User and Session Slice. Mar 17 17:38:38.449773 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 17:38:38.449785 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 17 17:38:38.449797 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 17 17:38:38.449809 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 17 17:38:38.449821 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 17 17:38:38.449833 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 17 17:38:38.449848 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... 
Mar 17 17:38:38.449860 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 17:38:38.449873 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 17 17:38:38.449884 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 17 17:38:38.449896 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 17 17:38:38.449908 systemd[1]: Reached target slices.target - Slice Units. Mar 17 17:38:38.449921 systemd[1]: Reached target swap.target - Swaps. Mar 17 17:38:38.449932 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 17 17:38:38.449947 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 17 17:38:38.449959 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 17 17:38:38.449972 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Mar 17 17:38:38.449984 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 17 17:38:38.449996 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 17 17:38:38.450008 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 17 17:38:38.450020 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 17 17:38:38.450033 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 17 17:38:38.450044 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 17 17:38:38.450059 systemd[1]: Mounting media.mount - External Media Directory... Mar 17 17:38:38.450071 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 17:38:38.450083 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 17 17:38:38.450094 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 17 17:38:38.450107 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 17 17:38:38.450119 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 17 17:38:38.450131 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:38:38.450143 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 17 17:38:38.450156 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 17 17:38:38.450171 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:38:38.450183 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 17 17:38:38.450324 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 17:38:38.450339 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 17 17:38:38.450351 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 17:38:38.450364 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 17 17:38:38.450376 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Mar 17 17:38:38.450389 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) 
Mar 17 17:38:38.450404 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 17 17:38:38.450416 kernel: fuse: init (API version 7.39) Mar 17 17:38:38.450428 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 17 17:38:38.450440 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 17 17:38:38.450452 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 17 17:38:38.450466 kernel: loop: module loaded Mar 17 17:38:38.450501 systemd-journald[1154]: Collecting audit messages is disabled. Mar 17 17:38:38.450527 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 17 17:38:38.450540 systemd-journald[1154]: Journal started Mar 17 17:38:38.450562 systemd-journald[1154]: Runtime Journal (/run/log/journal/16d85a65cb0a43a98b4eac998238e2c6) is 6.0M, max 48.4M, 42.3M free. Mar 17 17:38:38.457224 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 17:38:38.460232 systemd[1]: Started systemd-journald.service - Journal Service. Mar 17 17:38:38.460278 kernel: ACPI: bus type drm_connector registered Mar 17 17:38:38.462426 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 17 17:38:38.464297 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 17 17:38:38.465734 systemd[1]: Mounted media.mount - External Media Directory. Mar 17 17:38:38.467134 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 17 17:38:38.468563 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 17 17:38:38.469834 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 17 17:38:38.471303 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 17:38:38.472997 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 17 17:38:38.474486 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 17 17:38:38.474709 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 17 17:38:38.476543 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:38:38.476766 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:38:38.478397 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 17:38:38.478608 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 17 17:38:38.480062 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:38:38.480286 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:38:38.481829 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 17 17:38:38.482043 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 17 17:38:38.483460 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:38:38.483700 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:38:38.485548 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 17 17:38:38.487088 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 17 17:38:38.488833 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
Mar 17 17:38:38.504143 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 17 17:38:38.518261 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 17 17:38:38.520874 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 17 17:38:38.522069 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 17 17:38:38.524911 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 17 17:38:38.529428 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 17 17:38:38.531338 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 17:38:38.533960 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 17 17:38:38.535919 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 17 17:38:38.542416 systemd-journald[1154]: Time spent on flushing to /var/log/journal/16d85a65cb0a43a98b4eac998238e2c6 is 51.224ms for 937 entries. Mar 17 17:38:38.542416 systemd-journald[1154]: System Journal (/var/log/journal/16d85a65cb0a43a98b4eac998238e2c6) is 8.0M, max 195.6M, 187.6M free. Mar 17 17:38:38.628174 systemd-journald[1154]: Received client request to flush runtime journal. Mar 17 17:38:38.542334 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:38:38.547025 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 17 17:38:38.550440 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 17 17:38:38.552192 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 17 17:38:38.608018 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 17 17:38:38.610655 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 17 17:38:38.612776 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:38:38.619596 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 17 17:38:38.630999 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 17 17:38:38.634009 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 17 17:38:38.641057 systemd-tmpfiles[1209]: ACLs are not supported, ignoring. Mar 17 17:38:38.641076 systemd-tmpfiles[1209]: ACLs are not supported, ignoring. Mar 17 17:38:38.645541 udevadm[1221]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Mar 17 17:38:38.648007 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 17 17:38:38.656413 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 17 17:38:38.682309 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 17 17:38:38.693486 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 17 17:38:38.710705 systemd-tmpfiles[1232]: ACLs are not supported, ignoring. Mar 17 17:38:38.710727 systemd-tmpfiles[1232]: ACLs are not supported, ignoring. 
Mar 17 17:38:38.717822 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 17 17:38:39.296941 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 17 17:38:39.311463 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 17 17:38:39.345658 systemd-udevd[1238]: Using default interface naming scheme 'v255'. Mar 17 17:38:39.367146 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 17 17:38:39.383459 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 17 17:38:39.396434 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 17 17:38:39.449220 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1254) Mar 17 17:38:39.494752 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Mar 17 17:38:39.507033 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 17 17:38:39.548251 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Mar 17 17:38:39.558232 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Mar 17 17:38:39.587267 kernel: ACPI: button: Power Button [PWRF] Mar 17 17:38:39.600096 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 17 17:38:39.600434 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Mar 17 17:38:39.600654 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 17 17:38:39.620303 systemd-networkd[1244]: lo: Link UP Mar 17 17:38:39.621752 systemd-networkd[1244]: lo: Gained carrier Mar 17 17:38:39.625468 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 17 17:38:39.626315 systemd-networkd[1244]: Enumeration completed Mar 17 17:38:39.626818 systemd-networkd[1244]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:38:39.626823 systemd-networkd[1244]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 17:38:39.627698 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 17 17:38:39.629098 systemd-networkd[1244]: eth0: Link UP Mar 17 17:38:39.629218 systemd-networkd[1244]: eth0: Gained carrier Mar 17 17:38:39.629314 systemd-networkd[1244]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:38:39.633218 kernel: mousedev: PS/2 mouse device common for all mice Mar 17 17:38:39.639544 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 17 17:38:39.644315 systemd-networkd[1244]: eth0: DHCPv4 address 10.0.0.27/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 17 17:38:39.649483 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:38:39.716855 kernel: kvm_amd: TSC scaling supported Mar 17 17:38:39.716948 kernel: kvm_amd: Nested Virtualization enabled Mar 17 17:38:39.716967 kernel: kvm_amd: Nested Paging enabled Mar 17 17:38:39.716983 kernel: kvm_amd: LBR virtualization supported Mar 17 17:38:39.718266 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Mar 17 17:38:39.718331 kernel: kvm_amd: Virtual GIF supported Mar 17 17:38:39.742438 kernel: EDAC MC: Ver: 3.0.0 Mar 17 17:38:39.781016 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Mar 17 17:38:39.782775 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 17 17:38:39.794420 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 17 17:38:39.805502 lvm[1284]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 17:38:39.842209 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 17 17:38:39.844110 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 17 17:38:39.856538 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 17 17:38:39.864657 lvm[1287]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 17:38:39.908412 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 17 17:38:39.911272 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 17 17:38:39.912817 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 17 17:38:39.912861 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 17 17:38:39.914088 systemd[1]: Reached target machines.target - Containers. Mar 17 17:38:39.917033 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Mar 17 17:38:39.931412 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 17 17:38:39.934832 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 17 17:38:39.936192 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:38:39.937813 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 17 17:38:39.941387 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Mar 17 17:38:39.945578 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 17 17:38:39.948465 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 17 17:38:39.958332 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 17 17:38:39.962237 kernel: loop0: detected capacity change from 0 to 140992 Mar 17 17:38:39.978856 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 17 17:38:39.979976 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Mar 17 17:38:39.991256 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 17 17:38:40.020229 kernel: loop1: detected capacity change from 0 to 210664 Mar 17 17:38:40.088342 kernel: loop2: detected capacity change from 0 to 138184 Mar 17 17:38:40.117231 kernel: loop3: detected capacity change from 0 to 140992 Mar 17 17:38:40.135229 kernel: loop4: detected capacity change from 0 to 210664 Mar 17 17:38:40.142237 kernel: loop5: detected capacity change from 0 to 138184 Mar 17 17:38:40.148403 (sd-merge)[1310]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Mar 17 17:38:40.149011 (sd-merge)[1310]: Merged extensions into '/usr'. Mar 17 17:38:40.153681 systemd[1]: Reloading requested from client PID 1295 ('systemd-sysext') (unit systemd-sysext.service)... 
Mar 17 17:38:40.153699 systemd[1]: Reloading... Mar 17 17:38:40.274253 zram_generator::config[1338]: No configuration found. Mar 17 17:38:40.362337 ldconfig[1291]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 17 17:38:40.431078 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:38:40.498334 systemd[1]: Reloading finished in 344 ms. Mar 17 17:38:40.517928 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 17 17:38:40.519919 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 17 17:38:40.535603 systemd[1]: Starting ensure-sysext.service... Mar 17 17:38:40.539066 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 17 17:38:40.543903 systemd[1]: Reloading requested from client PID 1382 ('systemctl') (unit ensure-sysext.service)... Mar 17 17:38:40.543921 systemd[1]: Reloading... Mar 17 17:38:40.571489 systemd-tmpfiles[1383]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 17 17:38:40.571868 systemd-tmpfiles[1383]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 17 17:38:40.572924 systemd-tmpfiles[1383]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 17 17:38:40.573231 systemd-tmpfiles[1383]: ACLs are not supported, ignoring. Mar 17 17:38:40.573307 systemd-tmpfiles[1383]: ACLs are not supported, ignoring. Mar 17 17:38:40.577870 systemd-tmpfiles[1383]: Detected autofs mount point /boot during canonicalization of boot. Mar 17 17:38:40.577946 systemd-tmpfiles[1383]: Skipping /boot Mar 17 17:38:40.592859 zram_generator::config[1412]: No configuration found. Mar 17 17:38:40.600552 systemd-tmpfiles[1383]: Detected autofs mount point /boot during canonicalization of boot. Mar 17 17:38:40.600704 systemd-tmpfiles[1383]: Skipping /boot Mar 17 17:38:40.717061 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:38:40.784614 systemd[1]: Reloading finished in 240 ms. Mar 17 17:38:40.805342 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 17 17:38:40.829358 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 17 17:38:40.833409 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 17 17:38:40.838394 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 17 17:38:40.846627 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 17 17:38:40.850581 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 17 17:38:40.858456 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 17:38:40.858840 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:38:40.861941 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Mar 17 17:38:40.869551 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 17:38:40.873576 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 17:38:40.874825 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:38:40.874928 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 17:38:40.875969 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:38:40.878952 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:38:40.903780 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 17 17:38:40.916279 augenrules[1488]: No rules Mar 17 17:38:40.927462 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 17:38:40.927855 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 17 17:38:40.929724 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:38:40.929975 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:38:40.932403 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:38:40.932660 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:38:40.942429 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 17:38:40.942698 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:38:40.950637 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:38:40.955748 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 17:38:40.962270 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 17:38:40.965233 systemd-resolved[1460]: Positive Trust Anchors: Mar 17 17:38:40.965253 systemd-resolved[1460]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 17:38:40.965301 systemd-resolved[1460]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 17 17:38:40.994654 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:38:40.997964 systemd-resolved[1460]: Defaulting to hostname 'linux'. Mar 17 17:38:40.999153 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 17 17:38:41.000339 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 17:38:41.002717 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 17 17:38:41.004749 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Mar 17 17:38:41.006699 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:38:41.006936 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:38:41.008724 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:38:41.008968 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:38:41.011036 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:38:41.011277 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:38:41.015908 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 17 17:38:41.043739 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 17 17:38:41.052478 systemd[1]: Reached target network.target - Network. Mar 17 17:38:41.066625 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 17 17:38:41.067988 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 17:38:41.075596 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 17 17:38:41.077452 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:38:41.079325 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:38:41.083931 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 17 17:38:41.104429 augenrules[1517]: /sbin/augenrules: No change Mar 17 17:38:41.104716 augenrules[1536]: No rules Mar 17 17:38:41.107773 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 17:38:41.112794 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 17:38:41.114440 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:38:41.114704 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 17:38:41.114960 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 17:38:41.116938 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 17:38:41.117494 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 17 17:38:41.119332 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:38:41.119565 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:38:41.121253 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 17:38:41.121463 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 17 17:38:41.123065 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:38:41.123294 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:38:41.125105 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:38:41.125337 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:38:41.128884 systemd[1]: Finished ensure-sysext.service. 
Mar 17 17:38:41.134597 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 17:38:41.134671 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 17 17:38:41.139332 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 17 17:38:41.214524 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 17 17:38:41.927378 systemd-resolved[1460]: Clock change detected. Flushing caches. Mar 17 17:38:41.927421 systemd-timesyncd[1556]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 17 17:38:41.927473 systemd-timesyncd[1556]: Initial clock synchronization to Mon 2025-03-17 17:38:41.927317 UTC. Mar 17 17:38:41.928562 systemd[1]: Reached target sysinit.target - System Initialization. Mar 17 17:38:41.929802 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 17 17:38:41.931131 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 17 17:38:41.932441 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 17 17:38:41.933765 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 17 17:38:41.933793 systemd[1]: Reached target paths.target - Path Units. Mar 17 17:38:41.934740 systemd[1]: Reached target time-set.target - System Time Set. Mar 17 17:38:41.935978 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 17 17:38:41.937313 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 17 17:38:41.938598 systemd[1]: Reached target timers.target - Timer Units. Mar 17 17:38:41.940336 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 17 17:38:41.943775 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 17 17:38:41.946242 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 17 17:38:41.949606 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 17 17:38:41.951009 systemd[1]: Reached target sockets.target - Socket Units. Mar 17 17:38:41.952252 systemd[1]: Reached target basic.target - Basic System. Mar 17 17:38:41.953669 systemd[1]: System is tainted: cgroupsv1 Mar 17 17:38:41.953721 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 17 17:38:41.953752 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 17 17:38:41.955790 systemd[1]: Starting containerd.service - containerd container runtime... Mar 17 17:38:41.959409 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 17 17:38:41.963448 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 17 17:38:41.968454 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 17 17:38:41.971699 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 17 17:38:41.975730 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 17 17:38:41.980492 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Mar 17 17:38:41.985837 jq[1562]: false Mar 17 17:38:41.989186 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 17 17:38:41.993464 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 17 17:38:41.999389 extend-filesystems[1564]: Found loop3 Mar 17 17:38:41.999389 extend-filesystems[1564]: Found loop4 Mar 17 17:38:42.001616 extend-filesystems[1564]: Found loop5 Mar 17 17:38:42.001616 extend-filesystems[1564]: Found sr0 Mar 17 17:38:42.001616 extend-filesystems[1564]: Found vda Mar 17 17:38:42.001616 extend-filesystems[1564]: Found vda1 Mar 17 17:38:42.001616 extend-filesystems[1564]: Found vda2 Mar 17 17:38:42.001616 extend-filesystems[1564]: Found vda3 Mar 17 17:38:42.001616 extend-filesystems[1564]: Found usr Mar 17 17:38:42.001616 extend-filesystems[1564]: Found vda4 Mar 17 17:38:42.001616 extend-filesystems[1564]: Found vda6 Mar 17 17:38:42.001616 extend-filesystems[1564]: Found vda7 Mar 17 17:38:42.001616 extend-filesystems[1564]: Found vda9 Mar 17 17:38:42.001616 extend-filesystems[1564]: Checking size of /dev/vda9 Mar 17 17:38:42.003497 dbus-daemon[1561]: [system] SELinux support is enabled Mar 17 17:38:42.004594 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 17 17:38:42.015107 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 17 17:38:42.019994 systemd[1]: Starting update-engine.service - Update Engine... Mar 17 17:38:42.020103 extend-filesystems[1564]: Resized partition /dev/vda9 Mar 17 17:38:42.044418 extend-filesystems[1585]: resize2fs 1.47.1 (20-May-2024) Mar 17 17:38:42.049956 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 17 17:38:42.054973 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 17 17:38:42.055409 systemd-networkd[1244]: eth0: Gained IPv6LL Mar 17 17:38:42.058254 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 17 17:38:42.061235 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1242) Mar 17 17:38:42.068610 update_engine[1584]: I20250317 17:38:42.068512 1584 main.cc:92] Flatcar Update Engine starting Mar 17 17:38:42.072407 update_engine[1584]: I20250317 17:38:42.069921 1584 update_check_scheduler.cc:74] Next update check in 7m18s Mar 17 17:38:42.070895 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 17 17:38:42.075500 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 17 17:38:42.075827 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 17 17:38:42.076167 systemd[1]: motdgen.service: Deactivated successfully. Mar 17 17:38:42.076813 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 17 17:38:42.078824 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 17 17:38:42.079126 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Mar 17 17:38:42.089887 jq[1588]: true Mar 17 17:38:42.111253 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 17 17:38:42.147971 tar[1593]: linux-amd64/helm Mar 17 17:38:42.113803 (ntainerd)[1595]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 17 17:38:42.148619 extend-filesystems[1585]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 17 17:38:42.148619 extend-filesystems[1585]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 17 17:38:42.148619 extend-filesystems[1585]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 17 17:38:42.120188 systemd[1]: Started update-engine.service - Update Engine. Mar 17 17:38:42.153308 jq[1600]: true Mar 17 17:38:42.153491 extend-filesystems[1564]: Resized filesystem in /dev/vda9 Mar 17 17:38:42.121887 systemd[1]: Reached target network-online.target - Network is Online. Mar 17 17:38:42.132622 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 17 17:38:42.138384 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:38:42.142746 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 17 17:38:42.144635 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 17 17:38:42.144665 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 17 17:38:42.146429 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 17 17:38:42.146450 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 17 17:38:42.149819 systemd-logind[1578]: Watching system buttons on /dev/input/event1 (Power Button) Mar 17 17:38:42.149839 systemd-logind[1578]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 17 17:38:42.150096 systemd-logind[1578]: New seat seat0. Mar 17 17:38:42.153201 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 17 17:38:42.155979 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 17 17:38:42.158489 systemd[1]: Started systemd-logind.service - User Login Management. Mar 17 17:38:42.162022 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 17 17:38:42.165373 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 17 17:38:42.205063 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 17 17:38:42.228418 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 17 17:38:42.228782 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 17 17:38:42.232122 bash[1637]: Updated "/home/core/.ssh/authorized_keys" Mar 17 17:38:42.235380 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 17 17:38:42.240984 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 17 17:38:42.241412 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
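Editor's note: the extend-filesystems entries above show /dev/vda9 being grown online from 553472 to 1864699 4k blocks while mounted at /, with a resize2fs 1.47.1 banner suggesting that tool does the work. A minimal sketch of reproducing that online ext4 grow by hand follows; the device path is taken from this log and the commands are assumptions about how one would do it manually, not the contents of the Flatcar unit itself.

```python
import subprocess

# Minimal sketch: grow a mounted ext4 filesystem to fill its (already enlarged)
# partition, as extend-filesystems.service does above for /dev/vda9.
# Assumption: the underlying partition has already been extended; resize2fs
# with no size argument expands the filesystem to the partition size, online.
DEVICE = "/dev/vda9"  # taken from the log; adjust for other disk layouts

subprocess.run(["resize2fs", DEVICE], check=True)

# Afterwards, dumpe2fs -h prints the superblock, including the new "Block count:".
subprocess.run(["dumpe2fs", "-h", DEVICE], check=True)
```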
Mar 17 17:38:42.261185 locksmithd[1620]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 17 17:38:42.309336 sshd_keygen[1589]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 17 17:38:42.343212 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 17 17:38:42.366944 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 17 17:38:42.384353 systemd[1]: issuegen.service: Deactivated successfully. Mar 17 17:38:42.385866 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 17 17:38:42.395601 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 17 17:38:42.415427 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 17 17:38:42.429571 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 17 17:38:42.432105 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 17 17:38:42.433451 systemd[1]: Reached target getty.target - Login Prompts. Mar 17 17:38:42.553911 containerd[1595]: time="2025-03-17T17:38:42.553740576Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Mar 17 17:38:42.595185 containerd[1595]: time="2025-03-17T17:38:42.595043439Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:38:42.598432 containerd[1595]: time="2025-03-17T17:38:42.598369335Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.83-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:38:42.598432 containerd[1595]: time="2025-03-17T17:38:42.598416684Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 17 17:38:42.598600 containerd[1595]: time="2025-03-17T17:38:42.598441711Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 17 17:38:42.598752 containerd[1595]: time="2025-03-17T17:38:42.598722858Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 17 17:38:42.598797 containerd[1595]: time="2025-03-17T17:38:42.598751121Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 17 17:38:42.598868 containerd[1595]: time="2025-03-17T17:38:42.598842612Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:38:42.598868 containerd[1595]: time="2025-03-17T17:38:42.598863602Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:38:42.599239 containerd[1595]: time="2025-03-17T17:38:42.599194342Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:38:42.599239 containerd[1595]: time="2025-03-17T17:38:42.599218607Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Mar 17 17:38:42.599304 containerd[1595]: time="2025-03-17T17:38:42.599251489Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:38:42.599304 containerd[1595]: time="2025-03-17T17:38:42.599264053Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 17 17:38:42.599414 containerd[1595]: time="2025-03-17T17:38:42.599388726Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:38:42.599745 containerd[1595]: time="2025-03-17T17:38:42.599711682Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:38:42.600661 containerd[1595]: time="2025-03-17T17:38:42.599952493Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:38:42.600661 containerd[1595]: time="2025-03-17T17:38:42.599971118Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 17 17:38:42.600661 containerd[1595]: time="2025-03-17T17:38:42.600120188Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 17 17:38:42.600661 containerd[1595]: time="2025-03-17T17:38:42.600198475Z" level=info msg="metadata content store policy set" policy=shared Mar 17 17:38:42.611825 containerd[1595]: time="2025-03-17T17:38:42.611761395Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 17 17:38:42.611969 containerd[1595]: time="2025-03-17T17:38:42.611840513Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 17 17:38:42.611969 containerd[1595]: time="2025-03-17T17:38:42.611870249Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 17 17:38:42.611969 containerd[1595]: time="2025-03-17T17:38:42.611886880Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 17 17:38:42.611969 containerd[1595]: time="2025-03-17T17:38:42.611903371Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 17 17:38:42.612128 containerd[1595]: time="2025-03-17T17:38:42.612105651Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 17 17:38:42.612611 containerd[1595]: time="2025-03-17T17:38:42.612589738Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 17 17:38:42.612734 containerd[1595]: time="2025-03-17T17:38:42.612712088Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 17 17:38:42.612773 containerd[1595]: time="2025-03-17T17:38:42.612731474Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 17 17:38:42.612773 containerd[1595]: time="2025-03-17T17:38:42.612745971Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Mar 17 17:38:42.612773 containerd[1595]: time="2025-03-17T17:38:42.612760729Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 17 17:38:42.612835 containerd[1595]: time="2025-03-17T17:38:42.612774845Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 17 17:38:42.612835 containerd[1595]: time="2025-03-17T17:38:42.612791737Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 17 17:38:42.612835 containerd[1595]: time="2025-03-17T17:38:42.612811173Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 17 17:38:42.612835 containerd[1595]: time="2025-03-17T17:38:42.612826041Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 17 17:38:42.612924 containerd[1595]: time="2025-03-17T17:38:42.612840528Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 17 17:38:42.612924 containerd[1595]: time="2025-03-17T17:38:42.612853863Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 17 17:38:42.612924 containerd[1595]: time="2025-03-17T17:38:42.612867830Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 17 17:38:42.612924 containerd[1595]: time="2025-03-17T17:38:42.612889230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 17 17:38:42.612924 containerd[1595]: time="2025-03-17T17:38:42.612903687Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 17 17:38:42.612924 containerd[1595]: time="2025-03-17T17:38:42.612918174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 17 17:38:42.613060 containerd[1595]: time="2025-03-17T17:38:42.612933863Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 17 17:38:42.613060 containerd[1595]: time="2025-03-17T17:38:42.612946738Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 17 17:38:42.613060 containerd[1595]: time="2025-03-17T17:38:42.612961435Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 17 17:38:42.613060 containerd[1595]: time="2025-03-17T17:38:42.612974630Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 17 17:38:42.613060 containerd[1595]: time="2025-03-17T17:38:42.612989458Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 17 17:38:42.613060 containerd[1595]: time="2025-03-17T17:38:42.613002863Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 17 17:38:42.613060 containerd[1595]: time="2025-03-17T17:38:42.613018111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 17 17:38:42.613060 containerd[1595]: time="2025-03-17T17:38:42.613030224Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Mar 17 17:38:42.613060 containerd[1595]: time="2025-03-17T17:38:42.613045282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 17 17:38:42.613060 containerd[1595]: time="2025-03-17T17:38:42.613060992Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 17 17:38:42.613303 containerd[1595]: time="2025-03-17T17:38:42.613076491Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 17 17:38:42.613303 containerd[1595]: time="2025-03-17T17:38:42.613097550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 17 17:38:42.613303 containerd[1595]: time="2025-03-17T17:38:42.613112118Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 17 17:38:42.613303 containerd[1595]: time="2025-03-17T17:38:42.613124601Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 17 17:38:42.613303 containerd[1595]: time="2025-03-17T17:38:42.613179183Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 17 17:38:42.613303 containerd[1595]: time="2025-03-17T17:38:42.613199812Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 17 17:38:42.613303 containerd[1595]: time="2025-03-17T17:38:42.613211885Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 17 17:38:42.613303 containerd[1595]: time="2025-03-17T17:38:42.613239927Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 17 17:38:42.613303 containerd[1595]: time="2025-03-17T17:38:42.613252381Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 17 17:38:42.613303 containerd[1595]: time="2025-03-17T17:38:42.613278770Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 17 17:38:42.613303 containerd[1595]: time="2025-03-17T17:38:42.613291825Z" level=info msg="NRI interface is disabled by configuration." Mar 17 17:38:42.613303 containerd[1595]: time="2025-03-17T17:38:42.613303156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 17 17:38:42.613753 containerd[1595]: time="2025-03-17T17:38:42.613692286Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 17 17:38:42.613753 containerd[1595]: time="2025-03-17T17:38:42.613750435Z" level=info msg="Connect containerd service" Mar 17 17:38:42.618765 containerd[1595]: time="2025-03-17T17:38:42.613793385Z" level=info msg="using legacy CRI server" Mar 17 17:38:42.618765 containerd[1595]: time="2025-03-17T17:38:42.613801570Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 17 17:38:42.618765 containerd[1595]: time="2025-03-17T17:38:42.613940932Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 17 17:38:42.618765 containerd[1595]: time="2025-03-17T17:38:42.614611339Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 
17:38:42.618765 containerd[1595]: time="2025-03-17T17:38:42.614926059Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 17 17:38:42.618765 containerd[1595]: time="2025-03-17T17:38:42.614975852Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 17 17:38:42.618765 containerd[1595]: time="2025-03-17T17:38:42.615015727Z" level=info msg="Start subscribing containerd event" Mar 17 17:38:42.618765 containerd[1595]: time="2025-03-17T17:38:42.615047587Z" level=info msg="Start recovering state" Mar 17 17:38:42.618765 containerd[1595]: time="2025-03-17T17:38:42.615100466Z" level=info msg="Start event monitor" Mar 17 17:38:42.618765 containerd[1595]: time="2025-03-17T17:38:42.615118099Z" level=info msg="Start snapshots syncer" Mar 17 17:38:42.618765 containerd[1595]: time="2025-03-17T17:38:42.615126595Z" level=info msg="Start cni network conf syncer for default" Mar 17 17:38:42.618765 containerd[1595]: time="2025-03-17T17:38:42.615133578Z" level=info msg="Start streaming server" Mar 17 17:38:42.618765 containerd[1595]: time="2025-03-17T17:38:42.615187850Z" level=info msg="containerd successfully booted in 0.062846s" Mar 17 17:38:42.615581 systemd[1]: Started containerd.service - containerd container runtime. Mar 17 17:38:42.870692 tar[1593]: linux-amd64/LICENSE Mar 17 17:38:42.870808 tar[1593]: linux-amd64/README.md Mar 17 17:38:42.888646 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 17 17:38:43.382967 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:38:43.384737 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 17 17:38:43.387356 systemd[1]: Startup finished in 6.876s (kernel) + 5.007s (userspace) = 11.884s. Mar 17 17:38:43.398872 (kubelet)[1697]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:38:43.972687 kubelet[1697]: E0317 17:38:43.972575 1697 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:38:43.977162 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:38:43.977528 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:38:50.745672 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 17 17:38:50.758535 systemd[1]: Started sshd@0-10.0.0.27:22-10.0.0.1:38874.service - OpenSSH per-connection server daemon (10.0.0.1:38874). Mar 17 17:38:50.799093 sshd[1711]: Accepted publickey for core from 10.0.0.1 port 38874 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:38:50.801138 sshd-session[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:38:50.809786 systemd-logind[1578]: New session 1 of user core. Mar 17 17:38:50.811145 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 17 17:38:50.817442 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 17 17:38:50.829419 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 17 17:38:50.844552 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Mar 17 17:38:50.847797 (systemd)[1717]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 17 17:38:50.946152 systemd[1717]: Queued start job for default target default.target. Mar 17 17:38:50.946681 systemd[1717]: Created slice app.slice - User Application Slice. Mar 17 17:38:50.946708 systemd[1717]: Reached target paths.target - Paths. Mar 17 17:38:50.946724 systemd[1717]: Reached target timers.target - Timers. Mar 17 17:38:50.953325 systemd[1717]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 17 17:38:50.959774 systemd[1717]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 17 17:38:50.959843 systemd[1717]: Reached target sockets.target - Sockets. Mar 17 17:38:50.959856 systemd[1717]: Reached target basic.target - Basic System. Mar 17 17:38:50.959896 systemd[1717]: Reached target default.target - Main User Target. Mar 17 17:38:50.959927 systemd[1717]: Startup finished in 105ms. Mar 17 17:38:50.960514 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 17 17:38:50.962167 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 17 17:38:51.017487 systemd[1]: Started sshd@1-10.0.0.27:22-10.0.0.1:38886.service - OpenSSH per-connection server daemon (10.0.0.1:38886). Mar 17 17:38:51.052272 sshd[1729]: Accepted publickey for core from 10.0.0.1 port 38886 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:38:51.053623 sshd-session[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:38:51.057482 systemd-logind[1578]: New session 2 of user core. Mar 17 17:38:51.073487 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 17 17:38:51.125975 sshd[1732]: Connection closed by 10.0.0.1 port 38886 Mar 17 17:38:51.126292 sshd-session[1729]: pam_unix(sshd:session): session closed for user core Mar 17 17:38:51.134492 systemd[1]: Started sshd@2-10.0.0.27:22-10.0.0.1:38900.service - OpenSSH per-connection server daemon (10.0.0.1:38900). Mar 17 17:38:51.135065 systemd[1]: sshd@1-10.0.0.27:22-10.0.0.1:38886.service: Deactivated successfully. Mar 17 17:38:51.136782 systemd[1]: session-2.scope: Deactivated successfully. Mar 17 17:38:51.137447 systemd-logind[1578]: Session 2 logged out. Waiting for processes to exit. Mar 17 17:38:51.138658 systemd-logind[1578]: Removed session 2. Mar 17 17:38:51.165717 sshd[1735]: Accepted publickey for core from 10.0.0.1 port 38900 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:38:51.167142 sshd-session[1735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:38:51.171572 systemd-logind[1578]: New session 3 of user core. Mar 17 17:38:51.180483 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 17 17:38:51.230165 sshd[1740]: Connection closed by 10.0.0.1 port 38900 Mar 17 17:38:51.230556 sshd-session[1735]: pam_unix(sshd:session): session closed for user core Mar 17 17:38:51.238432 systemd[1]: Started sshd@3-10.0.0.27:22-10.0.0.1:38906.service - OpenSSH per-connection server daemon (10.0.0.1:38906). Mar 17 17:38:51.238918 systemd[1]: sshd@2-10.0.0.27:22-10.0.0.1:38900.service: Deactivated successfully. Mar 17 17:38:51.241036 systemd-logind[1578]: Session 3 logged out. Waiting for processes to exit. Mar 17 17:38:51.242147 systemd[1]: session-3.scope: Deactivated successfully. Mar 17 17:38:51.243408 systemd-logind[1578]: Removed session 3. 
Mar 17 17:38:51.272790 sshd[1742]: Accepted publickey for core from 10.0.0.1 port 38906 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:38:51.274606 sshd-session[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:38:51.278206 systemd-logind[1578]: New session 4 of user core. Mar 17 17:38:51.297463 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 17 17:38:51.352030 sshd[1748]: Connection closed by 10.0.0.1 port 38906 Mar 17 17:38:51.352566 sshd-session[1742]: pam_unix(sshd:session): session closed for user core Mar 17 17:38:51.362497 systemd[1]: Started sshd@4-10.0.0.27:22-10.0.0.1:38916.service - OpenSSH per-connection server daemon (10.0.0.1:38916). Mar 17 17:38:51.363095 systemd[1]: sshd@3-10.0.0.27:22-10.0.0.1:38906.service: Deactivated successfully. Mar 17 17:38:51.365784 systemd-logind[1578]: Session 4 logged out. Waiting for processes to exit. Mar 17 17:38:51.367022 systemd[1]: session-4.scope: Deactivated successfully. Mar 17 17:38:51.368148 systemd-logind[1578]: Removed session 4. Mar 17 17:38:51.394654 sshd[1750]: Accepted publickey for core from 10.0.0.1 port 38916 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:38:51.396216 sshd-session[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:38:51.400318 systemd-logind[1578]: New session 5 of user core. Mar 17 17:38:51.411592 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 17 17:38:51.472441 sudo[1757]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 17 17:38:51.472885 sudo[1757]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:38:51.493551 sudo[1757]: pam_unix(sudo:session): session closed for user root Mar 17 17:38:51.495274 sshd[1756]: Connection closed by 10.0.0.1 port 38916 Mar 17 17:38:51.495706 sshd-session[1750]: pam_unix(sshd:session): session closed for user core Mar 17 17:38:51.502492 systemd[1]: Started sshd@5-10.0.0.27:22-10.0.0.1:38930.service - OpenSSH per-connection server daemon (10.0.0.1:38930). Mar 17 17:38:51.503026 systemd[1]: sshd@4-10.0.0.27:22-10.0.0.1:38916.service: Deactivated successfully. Mar 17 17:38:51.505009 systemd[1]: session-5.scope: Deactivated successfully. Mar 17 17:38:51.505691 systemd-logind[1578]: Session 5 logged out. Waiting for processes to exit. Mar 17 17:38:51.506943 systemd-logind[1578]: Removed session 5. Mar 17 17:38:51.538014 sshd[1759]: Accepted publickey for core from 10.0.0.1 port 38930 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:38:51.539631 sshd-session[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:38:51.544084 systemd-logind[1578]: New session 6 of user core. Mar 17 17:38:51.550632 systemd[1]: Started session-6.scope - Session 6 of User core. 
Mar 17 17:38:51.604971 sudo[1767]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 17 17:38:51.605364 sudo[1767]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:38:51.609578 sudo[1767]: pam_unix(sudo:session): session closed for user root Mar 17 17:38:51.616283 sudo[1766]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 17 17:38:51.616676 sudo[1766]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:38:51.634550 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 17 17:38:51.666308 augenrules[1789]: No rules Mar 17 17:38:51.668179 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 17:38:51.668551 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 17 17:38:51.669848 sudo[1766]: pam_unix(sudo:session): session closed for user root Mar 17 17:38:51.671376 sshd[1765]: Connection closed by 10.0.0.1 port 38930 Mar 17 17:38:51.671747 sshd-session[1759]: pam_unix(sshd:session): session closed for user core Mar 17 17:38:51.687476 systemd[1]: Started sshd@6-10.0.0.27:22-10.0.0.1:38946.service - OpenSSH per-connection server daemon (10.0.0.1:38946). Mar 17 17:38:51.688082 systemd[1]: sshd@5-10.0.0.27:22-10.0.0.1:38930.service: Deactivated successfully. Mar 17 17:38:51.689849 systemd[1]: session-6.scope: Deactivated successfully. Mar 17 17:38:51.690533 systemd-logind[1578]: Session 6 logged out. Waiting for processes to exit. Mar 17 17:38:51.691827 systemd-logind[1578]: Removed session 6. Mar 17 17:38:51.719326 sshd[1796]: Accepted publickey for core from 10.0.0.1 port 38946 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:38:51.720902 sshd-session[1796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:38:51.725206 systemd-logind[1578]: New session 7 of user core. Mar 17 17:38:51.735583 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 17 17:38:51.790616 sudo[1802]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 17 17:38:51.791018 sudo[1802]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:38:52.450532 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 17 17:38:52.450742 (dockerd)[1822]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 17 17:38:53.156531 dockerd[1822]: time="2025-03-17T17:38:53.156450174Z" level=info msg="Starting up" Mar 17 17:38:53.981737 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 17 17:38:53.989506 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:38:54.141256 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:38:54.146668 (kubelet)[1858]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:38:54.335407 dockerd[1822]: time="2025-03-17T17:38:54.331976746Z" level=info msg="Loading containers: start." 
Mar 17 17:38:54.390831 kubelet[1858]: E0317 17:38:54.389881 1858 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:38:54.398881 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:38:54.399178 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:38:54.622270 kernel: Initializing XFRM netlink socket Mar 17 17:38:54.719823 systemd-networkd[1244]: docker0: Link UP Mar 17 17:38:54.764394 dockerd[1822]: time="2025-03-17T17:38:54.764334196Z" level=info msg="Loading containers: done." Mar 17 17:38:54.791104 dockerd[1822]: time="2025-03-17T17:38:54.790949670Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 17 17:38:54.791341 dockerd[1822]: time="2025-03-17T17:38:54.791261274Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Mar 17 17:38:54.791539 dockerd[1822]: time="2025-03-17T17:38:54.791508027Z" level=info msg="Daemon has completed initialization" Mar 17 17:38:54.838677 dockerd[1822]: time="2025-03-17T17:38:54.838557925Z" level=info msg="API listen on /run/docker.sock" Mar 17 17:38:54.838829 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 17 17:38:56.194260 containerd[1595]: time="2025-03-17T17:38:56.194200825Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\"" Mar 17 17:38:56.880776 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1633911403.mount: Deactivated successfully. 
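Editor's note: once dockerd reports "API listen on /run/docker.sock" above, the daemon can be queried over that Unix socket. The sketch below is illustrative only, assuming just the socket path printed in the log and the standard Docker Engine HTTP API (the /info endpoint); reading the socket requires root or membership in the docker group.

```python
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection variant that connects to a Unix socket instead of TCP."""
    def __init__(self, path):
        super().__init__("localhost")  # host only feeds the Host: header
        self._path = path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self._path)
        self.sock = sock

# /run/docker.sock is the path dockerd reports above.
conn = UnixHTTPConnection("/run/docker.sock")
conn.request("GET", "/info")
info = json.loads(conn.getresponse().read())
# The storage driver should match the "storage-driver=overlay2" line in the log.
print(info.get("Driver"), info.get("ServerVersion"))
```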
Mar 17 17:38:58.286495 containerd[1595]: time="2025-03-17T17:38:58.286400850Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:38:58.289555 containerd[1595]: time="2025-03-17T17:38:58.289454405Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.11: active requests=0, bytes read=32674573" Mar 17 17:38:58.293438 containerd[1595]: time="2025-03-17T17:38:58.293341794Z" level=info msg="ImageCreate event name:\"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:38:58.309251 containerd[1595]: time="2025-03-17T17:38:58.305468201Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:38:58.309251 containerd[1595]: time="2025-03-17T17:38:58.306902460Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.11\" with image id \"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\", size \"32671373\" in 2.112642344s" Mar 17 17:38:58.309251 containerd[1595]: time="2025-03-17T17:38:58.306941965Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\" returns image reference \"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\"" Mar 17 17:38:58.350084 containerd[1595]: time="2025-03-17T17:38:58.350016960Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\"" Mar 17 17:39:02.148663 containerd[1595]: time="2025-03-17T17:39:02.148544238Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:02.156048 containerd[1595]: time="2025-03-17T17:39:02.154921124Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.11: active requests=0, bytes read=29619772" Mar 17 17:39:02.165478 containerd[1595]: time="2025-03-17T17:39:02.165364345Z" level=info msg="ImageCreate event name:\"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:02.172386 containerd[1595]: time="2025-03-17T17:39:02.172152884Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:02.174392 containerd[1595]: time="2025-03-17T17:39:02.172969054Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.11\" with image id \"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\", size \"31107380\" in 3.822895307s" Mar 17 17:39:02.174392 containerd[1595]: time="2025-03-17T17:39:02.173027985Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\" returns image reference \"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\"" Mar 17 
17:39:02.242265 containerd[1595]: time="2025-03-17T17:39:02.242195845Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\"" Mar 17 17:39:04.425460 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 17 17:39:04.439563 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:39:04.655665 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:39:04.661883 (kubelet)[2138]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:39:04.872311 kubelet[2138]: E0317 17:39:04.872048 2138 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:39:04.876967 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:39:04.877353 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:39:05.578545 containerd[1595]: time="2025-03-17T17:39:05.577069492Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:05.581042 containerd[1595]: time="2025-03-17T17:39:05.580407541Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.11: active requests=0, bytes read=17903309" Mar 17 17:39:05.583096 containerd[1595]: time="2025-03-17T17:39:05.583017965Z" level=info msg="ImageCreate event name:\"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:05.588790 containerd[1595]: time="2025-03-17T17:39:05.588713404Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:05.592207 containerd[1595]: time="2025-03-17T17:39:05.592145740Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.11\" with image id \"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\", size \"19390935\" in 3.349882478s" Mar 17 17:39:05.592207 containerd[1595]: time="2025-03-17T17:39:05.592193219Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\" returns image reference \"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\"" Mar 17 17:39:05.634869 containerd[1595]: time="2025-03-17T17:39:05.634719566Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\"" Mar 17 17:39:07.561323 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount55392650.mount: Deactivated successfully. 
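Editor's note: each "Pulled image ... in Ns" line above pairs a byte count with a wall-clock duration, so a rough effective pull rate can be read straight off the log. The sketch below only reproduces that arithmetic for the three pulls logged so far; it ignores registry latency and layer decompression, so treat the result as a ballpark figure.

```python
# Sizes (bytes) and durations (seconds) copied verbatim from the containerd
# "Pulled image" lines above.
pulls = {
    "kube-apiserver:v1.30.11": (32671373, 2.112642344),
    "kube-controller-manager:v1.30.11": (31107380, 3.822895307),
    "kube-scheduler:v1.30.11": (19390935, 3.349882478),
}

for image, (size_bytes, seconds) in pulls.items():
    rate_mib_s = size_bytes / seconds / (1024 * 1024)
    print(f"{image}: {rate_mib_s:.1f} MiB/s effective pull rate")
```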
Mar 17 17:39:09.611391 containerd[1595]: time="2025-03-17T17:39:09.610289769Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:09.619052 containerd[1595]: time="2025-03-17T17:39:09.618928356Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.11: active requests=0, bytes read=29185372" Mar 17 17:39:09.644453 containerd[1595]: time="2025-03-17T17:39:09.644366973Z" level=info msg="ImageCreate event name:\"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:09.656515 containerd[1595]: time="2025-03-17T17:39:09.656349360Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:09.657441 containerd[1595]: time="2025-03-17T17:39:09.657131427Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.11\" with image id \"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\", repo tag \"registry.k8s.io/kube-proxy:v1.30.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\", size \"29184391\" in 4.022356648s" Mar 17 17:39:09.657441 containerd[1595]: time="2025-03-17T17:39:09.657170530Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\" returns image reference \"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\"" Mar 17 17:39:09.692317 containerd[1595]: time="2025-03-17T17:39:09.692201193Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Mar 17 17:39:11.028344 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3580467226.mount: Deactivated successfully. Mar 17 17:39:14.925486 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 17 17:39:14.938500 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:39:15.110979 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:39:15.117239 (kubelet)[2205]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:39:15.446305 kubelet[2205]: E0317 17:39:15.446150 2205 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:39:15.451318 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:39:15.451652 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
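Editor's note: the kubelet failure above, like the earlier ones at 17:38:43 and 17:38:54, has a single cause: /var/lib/kubelet/config.yaml does not exist yet because the node has not been provisioned (kubeadm init/join has not run). As a minimal sketch of the file the failing unit is looking for, not the exact content kubeadm would generate on this host, a bare KubeletConfiguration can be written as below; every value is an illustrative assumption.

```python
from pathlib import Path
from textwrap import dedent

# Minimal sketch of /var/lib/kubelet/config.yaml. On a real node this file is
# generated by `kubeadm init` / `kubeadm join`; the values here are assumptions
# for illustration. cgroupfs matches the "CgroupDriver":"cgroupfs" seen later
# in this log once the kubelet does start.
config = dedent("""\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    authentication:
      anonymous:
        enabled: false
    cgroupDriver: cgroupfs
    staticPodPath: /etc/kubernetes/manifests
""")

path = Path("/var/lib/kubelet/config.yaml")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(config)
print(f"wrote {path} ({len(config)} bytes)")
```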
Mar 17 17:39:16.782306 containerd[1595]: time="2025-03-17T17:39:16.782245287Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:16.808093 containerd[1595]: time="2025-03-17T17:39:16.807988894Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Mar 17 17:39:16.811908 containerd[1595]: time="2025-03-17T17:39:16.811874547Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:16.824303 containerd[1595]: time="2025-03-17T17:39:16.824204471Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:16.825735 containerd[1595]: time="2025-03-17T17:39:16.825643266Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 7.133348196s" Mar 17 17:39:16.825735 containerd[1595]: time="2025-03-17T17:39:16.825714583Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Mar 17 17:39:16.853933 containerd[1595]: time="2025-03-17T17:39:16.853880182Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Mar 17 17:39:18.451301 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount864161032.mount: Deactivated successfully. 
Mar 17 17:39:18.518106 containerd[1595]: time="2025-03-17T17:39:18.517881410Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:18.522789 containerd[1595]: time="2025-03-17T17:39:18.522561734Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Mar 17 17:39:18.527287 containerd[1595]: time="2025-03-17T17:39:18.525585395Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:18.537367 containerd[1595]: time="2025-03-17T17:39:18.537203933Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:18.540542 containerd[1595]: time="2025-03-17T17:39:18.538994023Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 1.685063094s" Mar 17 17:39:18.540542 containerd[1595]: time="2025-03-17T17:39:18.539831973Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Mar 17 17:39:18.617263 containerd[1595]: time="2025-03-17T17:39:18.616506643Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Mar 17 17:39:19.522746 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3232280410.mount: Deactivated successfully. Mar 17 17:39:25.675508 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Mar 17 17:39:25.703129 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:39:26.569250 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:39:26.583070 (kubelet)[2308]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:39:26.908360 kubelet[2308]: E0317 17:39:26.907966 2308 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:39:26.916051 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:39:26.916358 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:39:27.505010 update_engine[1584]: I20250317 17:39:27.504909 1584 update_attempter.cc:509] Updating boot flags... 
Mar 17 17:39:28.759297 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2326) Mar 17 17:39:29.027303 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2324) Mar 17 17:39:29.122945 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2324) Mar 17 17:39:30.998596 containerd[1595]: time="2025-03-17T17:39:30.998499361Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:31.032663 containerd[1595]: time="2025-03-17T17:39:31.032568922Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Mar 17 17:39:31.064436 containerd[1595]: time="2025-03-17T17:39:31.064345967Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:31.102034 containerd[1595]: time="2025-03-17T17:39:31.101969951Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:31.103200 containerd[1595]: time="2025-03-17T17:39:31.103129267Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 12.486565857s" Mar 17 17:39:31.103266 containerd[1595]: time="2025-03-17T17:39:31.103204150Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Mar 17 17:39:34.569022 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:39:34.583725 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:39:34.612534 systemd[1]: Reloading requested from client PID 2417 ('systemctl') (unit session-7.scope)... Mar 17 17:39:34.612554 systemd[1]: Reloading... Mar 17 17:39:34.721285 zram_generator::config[2462]: No configuration found. Mar 17 17:39:35.151982 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:39:35.259264 systemd[1]: Reloading finished in 645 ms. Mar 17 17:39:35.319447 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 17 17:39:35.319564 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 17 17:39:35.320055 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:39:35.322842 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:39:35.514448 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:39:35.520653 (kubelet)[2516]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 17:39:35.571432 kubelet[2516]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:39:35.571432 kubelet[2516]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 17:39:35.571432 kubelet[2516]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:39:35.572949 kubelet[2516]: I0317 17:39:35.572852 2516 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 17:39:35.875332 kubelet[2516]: I0317 17:39:35.875162 2516 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 17 17:39:35.875332 kubelet[2516]: I0317 17:39:35.875204 2516 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 17:39:35.875819 kubelet[2516]: I0317 17:39:35.875501 2516 server.go:927] "Client rotation is on, will bootstrap in background" Mar 17 17:39:36.427308 kubelet[2516]: I0317 17:39:36.427249 2516 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 17:39:36.431093 kubelet[2516]: E0317 17:39:36.429569 2516 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.27:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.27:6443: connect: connection refused Mar 17 17:39:36.444479 kubelet[2516]: I0317 17:39:36.444420 2516 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 17:39:36.445009 kubelet[2516]: I0317 17:39:36.444955 2516 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 17:39:36.445191 kubelet[2516]: I0317 17:39:36.444992 2516 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 17 17:39:36.445191 kubelet[2516]: I0317 17:39:36.445192 2516 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 17:39:36.445385 kubelet[2516]: I0317 17:39:36.445202 2516 container_manager_linux.go:301] "Creating device plugin manager" Mar 17 17:39:36.445385 kubelet[2516]: I0317 17:39:36.445372 2516 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:39:36.446410 kubelet[2516]: I0317 17:39:36.446376 2516 kubelet.go:400] "Attempting to sync node with API server" Mar 17 17:39:36.446410 kubelet[2516]: I0317 17:39:36.446395 2516 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 17:39:36.446492 kubelet[2516]: I0317 17:39:36.446420 2516 kubelet.go:312] "Adding apiserver pod source" Mar 17 17:39:36.446492 kubelet[2516]: I0317 17:39:36.446440 2516 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 17:39:36.448298 kubelet[2516]: W0317 17:39:36.448177 2516 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.27:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused Mar 17 17:39:36.448298 kubelet[2516]: E0317 17:39:36.448297 2516 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.27:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused Mar 17 17:39:36.448957 kubelet[2516]: W0317 17:39:36.448895 2516 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://10.0.0.27:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused Mar 17 17:39:36.448957 kubelet[2516]: E0317 17:39:36.448960 2516 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.27:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused Mar 17 17:39:36.475076 kubelet[2516]: I0317 17:39:36.475030 2516 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 17 17:39:36.477638 kubelet[2516]: I0317 17:39:36.477595 2516 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 17:39:36.477787 kubelet[2516]: W0317 17:39:36.477675 2516 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 17 17:39:36.478559 kubelet[2516]: I0317 17:39:36.478428 2516 server.go:1264] "Started kubelet" Mar 17 17:39:36.478750 kubelet[2516]: I0317 17:39:36.478614 2516 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 17:39:36.478786 kubelet[2516]: I0317 17:39:36.478724 2516 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 17:39:36.479122 kubelet[2516]: I0317 17:39:36.479090 2516 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 17:39:36.479723 kubelet[2516]: I0317 17:39:36.479706 2516 server.go:455] "Adding debug handlers to kubelet server" Mar 17 17:39:36.481632 kubelet[2516]: I0317 17:39:36.481608 2516 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 17:39:36.493705 kubelet[2516]: I0317 17:39:36.493649 2516 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 17 17:39:36.494298 kubelet[2516]: I0317 17:39:36.494091 2516 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 17:39:36.494298 kubelet[2516]: I0317 17:39:36.494161 2516 reconciler.go:26] "Reconciler: start to sync state" Mar 17 17:39:36.494298 kubelet[2516]: E0317 17:39:36.494285 2516 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.27:6443: connect: connection refused" interval="200ms" Mar 17 17:39:36.494927 kubelet[2516]: I0317 17:39:36.494802 2516 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 17:39:36.495843 kubelet[2516]: W0317 17:39:36.495777 2516 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.27:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused Mar 17 17:39:36.495843 kubelet[2516]: E0317 17:39:36.495831 2516 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.27:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused Mar 17 17:39:36.496306 kubelet[2516]: I0317 17:39:36.496264 2516 factory.go:221] Registration of 
the containerd container factory successfully Mar 17 17:39:36.496306 kubelet[2516]: I0317 17:39:36.496278 2516 factory.go:221] Registration of the systemd container factory successfully Mar 17 17:39:36.527635 kubelet[2516]: E0317 17:39:36.527540 2516 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 17:39:36.540927 kubelet[2516]: I0317 17:39:36.540848 2516 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 17:39:36.541747 kubelet[2516]: E0317 17:39:36.541554 2516 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.27:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.27:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.182da7d7dfe54326 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-17 17:39:36.478401318 +0000 UTC m=+0.952838620,LastTimestamp:2025-03-17 17:39:36.478401318 +0000 UTC m=+0.952838620,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 17 17:39:36.543399 kubelet[2516]: I0317 17:39:36.543354 2516 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 17 17:39:36.544697 kubelet[2516]: I0317 17:39:36.544639 2516 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 17:39:36.544697 kubelet[2516]: I0317 17:39:36.544713 2516 kubelet.go:2337] "Starting kubelet main sync loop" Mar 17 17:39:36.544841 kubelet[2516]: E0317 17:39:36.544773 2516 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 17:39:36.546878 kubelet[2516]: W0317 17:39:36.546795 2516 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.27:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused Mar 17 17:39:36.546968 kubelet[2516]: E0317 17:39:36.546897 2516 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.27:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused Mar 17 17:39:36.559126 kubelet[2516]: I0317 17:39:36.559084 2516 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 17:39:36.559126 kubelet[2516]: I0317 17:39:36.559117 2516 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 17:39:36.559300 kubelet[2516]: I0317 17:39:36.559150 2516 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:39:36.595872 kubelet[2516]: I0317 17:39:36.595758 2516 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 17 17:39:36.596465 kubelet[2516]: E0317 17:39:36.596338 2516 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.27:6443/api/v1/nodes\": dial tcp 10.0.0.27:6443: connect: connection refused" node="localhost" Mar 17 17:39:36.645593 kubelet[2516]: E0317 17:39:36.645479 2516 kubelet.go:2361] "Skipping pod 
synchronization" err="container runtime status check may not have completed yet" Mar 17 17:39:36.695732 kubelet[2516]: E0317 17:39:36.695523 2516 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.27:6443: connect: connection refused" interval="400ms" Mar 17 17:39:36.701299 kubelet[2516]: I0317 17:39:36.701243 2516 policy_none.go:49] "None policy: Start" Mar 17 17:39:36.702533 kubelet[2516]: I0317 17:39:36.702491 2516 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 17:39:36.702533 kubelet[2516]: I0317 17:39:36.702538 2516 state_mem.go:35] "Initializing new in-memory state store" Mar 17 17:39:36.807741 kubelet[2516]: I0317 17:39:36.807267 2516 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 17 17:39:36.807741 kubelet[2516]: E0317 17:39:36.807661 2516 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.27:6443/api/v1/nodes\": dial tcp 10.0.0.27:6443: connect: connection refused" node="localhost" Mar 17 17:39:36.820827 kubelet[2516]: I0317 17:39:36.819391 2516 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 17:39:36.820827 kubelet[2516]: I0317 17:39:36.819725 2516 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 17:39:36.820827 kubelet[2516]: I0317 17:39:36.819878 2516 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 17:39:36.825026 kubelet[2516]: E0317 17:39:36.824980 2516 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 17 17:39:36.846469 kubelet[2516]: I0317 17:39:36.846339 2516 topology_manager.go:215] "Topology Admit Handler" podUID="d79ab404294384d4bcc36fb5b5509bbb" podNamespace="kube-system" podName="kube-scheduler-localhost" Mar 17 17:39:36.848129 kubelet[2516]: I0317 17:39:36.848089 2516 topology_manager.go:215] "Topology Admit Handler" podUID="cfef04c846d675d91569400ca405e82a" podNamespace="kube-system" podName="kube-apiserver-localhost" Mar 17 17:39:36.849163 kubelet[2516]: I0317 17:39:36.849121 2516 topology_manager.go:215] "Topology Admit Handler" podUID="23a18e2dc14f395c5f1bea711a5a9344" podNamespace="kube-system" podName="kube-controller-manager-localhost" Mar 17 17:39:36.895249 kubelet[2516]: I0317 17:39:36.895164 2516 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d79ab404294384d4bcc36fb5b5509bbb-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d79ab404294384d4bcc36fb5b5509bbb\") " pod="kube-system/kube-scheduler-localhost" Mar 17 17:39:36.996394 kubelet[2516]: I0317 17:39:36.996191 2516 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:39:36.996394 kubelet[2516]: I0317 17:39:36.996277 2516 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:39:36.996394 kubelet[2516]: I0317 17:39:36.996302 2516 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cfef04c846d675d91569400ca405e82a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"cfef04c846d675d91569400ca405e82a\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:39:36.996394 kubelet[2516]: I0317 17:39:36.996320 2516 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:39:36.996394 kubelet[2516]: I0317 17:39:36.996341 2516 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cfef04c846d675d91569400ca405e82a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"cfef04c846d675d91569400ca405e82a\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:39:36.996633 kubelet[2516]: I0317 17:39:36.996421 2516 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:39:36.996633 kubelet[2516]: I0317 17:39:36.996491 2516 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:39:36.996633 kubelet[2516]: I0317 17:39:36.996569 2516 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cfef04c846d675d91569400ca405e82a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"cfef04c846d675d91569400ca405e82a\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:39:37.097181 kubelet[2516]: E0317 17:39:37.097114 2516 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.27:6443: connect: connection refused" interval="800ms" Mar 17 17:39:37.154825 kubelet[2516]: E0317 17:39:37.154744 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:39:37.155586 containerd[1595]: time="2025-03-17T17:39:37.155527142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d79ab404294384d4bcc36fb5b5509bbb,Namespace:kube-system,Attempt:0,}" Mar 17 17:39:37.156736 kubelet[2516]: E0317 17:39:37.156700 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:39:37.156828 kubelet[2516]: E0317 17:39:37.156760 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:39:37.157265 containerd[1595]: time="2025-03-17T17:39:37.157204930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:cfef04c846d675d91569400ca405e82a,Namespace:kube-system,Attempt:0,}" Mar 17 17:39:37.157307 containerd[1595]: time="2025-03-17T17:39:37.157276125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:23a18e2dc14f395c5f1bea711a5a9344,Namespace:kube-system,Attempt:0,}" Mar 17 17:39:37.209324 kubelet[2516]: I0317 17:39:37.209248 2516 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 17 17:39:37.209659 kubelet[2516]: E0317 17:39:37.209617 2516 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.27:6443/api/v1/nodes\": dial tcp 10.0.0.27:6443: connect: connection refused" node="localhost" Mar 17 17:39:37.468268 kubelet[2516]: W0317 17:39:37.468139 2516 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.27:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused Mar 17 17:39:37.468450 kubelet[2516]: E0317 17:39:37.468287 2516 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.27:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused Mar 17 17:39:37.782072 kubelet[2516]: W0317 17:39:37.781918 2516 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.27:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused Mar 17 17:39:37.782072 kubelet[2516]: E0317 17:39:37.782000 2516 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.27:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused Mar 17 17:39:37.831186 kubelet[2516]: W0317 17:39:37.831130 2516 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.27:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused Mar 17 17:39:37.831186 kubelet[2516]: E0317 17:39:37.831173 2516 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.27:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused Mar 17 17:39:37.898110 kubelet[2516]: E0317 17:39:37.898012 2516 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.27:6443: connect: connection refused" interval="1.6s" Mar 17 17:39:37.965251 kubelet[2516]: W0317 17:39:37.965116 2516 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list 
*v1.CSIDriver: Get "https://10.0.0.27:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused Mar 17 17:39:37.965251 kubelet[2516]: E0317 17:39:37.965247 2516 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.27:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused Mar 17 17:39:38.033193 kubelet[2516]: I0317 17:39:38.032946 2516 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 17 17:39:38.033627 kubelet[2516]: E0317 17:39:38.033553 2516 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.27:6443/api/v1/nodes\": dial tcp 10.0.0.27:6443: connect: connection refused" node="localhost" Mar 17 17:39:38.496809 kubelet[2516]: E0317 17:39:38.496734 2516 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.27:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.27:6443: connect: connection refused Mar 17 17:39:38.713301 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount967102642.mount: Deactivated successfully. Mar 17 17:39:38.782045 containerd[1595]: time="2025-03-17T17:39:38.781825473Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:39:38.800382 containerd[1595]: time="2025-03-17T17:39:38.800280804Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Mar 17 17:39:38.825192 containerd[1595]: time="2025-03-17T17:39:38.825101943Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:39:38.830732 containerd[1595]: time="2025-03-17T17:39:38.830679532Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:39:38.838563 containerd[1595]: time="2025-03-17T17:39:38.838408394Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:39:38.843097 containerd[1595]: time="2025-03-17T17:39:38.843025950Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 17 17:39:38.848029 containerd[1595]: time="2025-03-17T17:39:38.847954896Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 17 17:39:38.853326 containerd[1595]: time="2025-03-17T17:39:38.853261524Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:39:38.854421 containerd[1595]: time="2025-03-17T17:39:38.854373984Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.698727827s" Mar 17 17:39:38.862616 containerd[1595]: time="2025-03-17T17:39:38.862546102Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.705241544s" Mar 17 17:39:38.865786 containerd[1595]: time="2025-03-17T17:39:38.865707229Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.708319453s" Mar 17 17:39:39.295073 containerd[1595]: time="2025-03-17T17:39:39.292379651Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:39:39.295073 containerd[1595]: time="2025-03-17T17:39:39.294507737Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:39:39.295073 containerd[1595]: time="2025-03-17T17:39:39.294530370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:39:39.295073 containerd[1595]: time="2025-03-17T17:39:39.294823904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:39:39.297088 containerd[1595]: time="2025-03-17T17:39:39.296936230Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:39:39.297088 containerd[1595]: time="2025-03-17T17:39:39.296989280Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:39:39.297088 containerd[1595]: time="2025-03-17T17:39:39.297002264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:39:39.297327 containerd[1595]: time="2025-03-17T17:39:39.297282012Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:39:39.369762 containerd[1595]: time="2025-03-17T17:39:39.365363565Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:39:39.369934 containerd[1595]: time="2025-03-17T17:39:39.369725837Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:39:39.369934 containerd[1595]: time="2025-03-17T17:39:39.369915365Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:39:39.370242 containerd[1595]: time="2025-03-17T17:39:39.370188200Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:39:39.432946 containerd[1595]: time="2025-03-17T17:39:39.432881325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:cfef04c846d675d91569400ca405e82a,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b22fc32ae2c51675e40f493480be6857d13622ed5c3ea691fcf525ff166bb3a\"" Mar 17 17:39:39.435797 kubelet[2516]: E0317 17:39:39.435770 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:39:39.439408 containerd[1595]: time="2025-03-17T17:39:39.439289357Z" level=info msg="CreateContainer within sandbox \"8b22fc32ae2c51675e40f493480be6857d13622ed5c3ea691fcf525ff166bb3a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 17 17:39:39.445924 containerd[1595]: time="2025-03-17T17:39:39.445887839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d79ab404294384d4bcc36fb5b5509bbb,Namespace:kube-system,Attempt:0,} returns sandbox id \"4bd75f16f6e67afef9200b5002117424b80d10239c077f196179b584d50a2d3a\"" Mar 17 17:39:39.446681 kubelet[2516]: E0317 17:39:39.446659 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:39:39.449097 containerd[1595]: time="2025-03-17T17:39:39.448985725Z" level=info msg="CreateContainer within sandbox \"4bd75f16f6e67afef9200b5002117424b80d10239c077f196179b584d50a2d3a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 17 17:39:39.473119 containerd[1595]: time="2025-03-17T17:39:39.472933706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:23a18e2dc14f395c5f1bea711a5a9344,Namespace:kube-system,Attempt:0,} returns sandbox id \"c8b53cc84a0da8021ab57bfc1a7ebce1409162a632b677f13a4b502027b31d89\"" Mar 17 17:39:39.473743 kubelet[2516]: E0317 17:39:39.473707 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:39:39.476098 containerd[1595]: time="2025-03-17T17:39:39.476070946Z" level=info msg="CreateContainer within sandbox \"c8b53cc84a0da8021ab57bfc1a7ebce1409162a632b677f13a4b502027b31d89\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 17 17:39:39.498765 kubelet[2516]: E0317 17:39:39.498690 2516 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.27:6443: connect: connection refused" interval="3.2s" Mar 17 17:39:39.635569 kubelet[2516]: I0317 17:39:39.635435 2516 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 17 17:39:39.636022 kubelet[2516]: E0317 17:39:39.635972 2516 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.27:6443/api/v1/nodes\": dial tcp 10.0.0.27:6443: connect: connection refused" node="localhost" Mar 17 17:39:39.664916 kubelet[2516]: E0317 17:39:39.664762 2516 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.27:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.27:6443: connect: connection refused" 
event="&Event{ObjectMeta:{localhost.182da7d7dfe54326 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-17 17:39:36.478401318 +0000 UTC m=+0.952838620,LastTimestamp:2025-03-17 17:39:36.478401318 +0000 UTC m=+0.952838620,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 17 17:39:39.889608 kubelet[2516]: W0317 17:39:39.889430 2516 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.27:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused Mar 17 17:39:39.889608 kubelet[2516]: E0317 17:39:39.889514 2516 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.27:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused Mar 17 17:39:39.899176 kubelet[2516]: W0317 17:39:39.899060 2516 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.27:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused Mar 17 17:39:39.899176 kubelet[2516]: E0317 17:39:39.899169 2516 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.27:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused Mar 17 17:39:40.051356 containerd[1595]: time="2025-03-17T17:39:40.051048083Z" level=info msg="CreateContainer within sandbox \"4bd75f16f6e67afef9200b5002117424b80d10239c077f196179b584d50a2d3a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c7ef0b0ab1f013d1a39b05b3301d4f4ba9ad8a89462d0bac146d89ece301f009\"" Mar 17 17:39:40.053060 containerd[1595]: time="2025-03-17T17:39:40.053002890Z" level=info msg="StartContainer for \"c7ef0b0ab1f013d1a39b05b3301d4f4ba9ad8a89462d0bac146d89ece301f009\"" Mar 17 17:39:40.275796 containerd[1595]: time="2025-03-17T17:39:40.272427300Z" level=info msg="CreateContainer within sandbox \"c8b53cc84a0da8021ab57bfc1a7ebce1409162a632b677f13a4b502027b31d89\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b480a84ca2bb3f36ae9dcbf619769ec6907da1ca4019a89103eac4ce96f1101e\"" Mar 17 17:39:40.275796 containerd[1595]: time="2025-03-17T17:39:40.273082617Z" level=info msg="CreateContainer within sandbox \"8b22fc32ae2c51675e40f493480be6857d13622ed5c3ea691fcf525ff166bb3a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"dbee84c9521c77b5791321ba1e7925a21ca9c7dcaa9e7206beaaf663379617b5\"" Mar 17 17:39:40.275796 containerd[1595]: time="2025-03-17T17:39:40.273280400Z" level=info msg="StartContainer for \"c7ef0b0ab1f013d1a39b05b3301d4f4ba9ad8a89462d0bac146d89ece301f009\" returns successfully" Mar 17 17:39:40.275796 containerd[1595]: time="2025-03-17T17:39:40.274263975Z" level=info msg="StartContainer for \"b480a84ca2bb3f36ae9dcbf619769ec6907da1ca4019a89103eac4ce96f1101e\"" Mar 17 17:39:40.277512 containerd[1595]: time="2025-03-17T17:39:40.277467968Z" level=info 
msg="StartContainer for \"dbee84c9521c77b5791321ba1e7925a21ca9c7dcaa9e7206beaaf663379617b5\"" Mar 17 17:39:40.571535 kubelet[2516]: E0317 17:39:40.570293 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:39:40.849950 kubelet[2516]: W0317 17:39:40.849576 2516 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.27:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused Mar 17 17:39:40.849950 kubelet[2516]: E0317 17:39:40.849704 2516 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.27:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused Mar 17 17:39:41.184372 kubelet[2516]: W0317 17:39:41.182131 2516 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.27:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused Mar 17 17:39:41.184372 kubelet[2516]: E0317 17:39:41.182243 2516 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.27:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused Mar 17 17:39:41.298621 containerd[1595]: time="2025-03-17T17:39:41.298547730Z" level=info msg="StartContainer for \"b480a84ca2bb3f36ae9dcbf619769ec6907da1ca4019a89103eac4ce96f1101e\" returns successfully" Mar 17 17:39:41.397580 containerd[1595]: time="2025-03-17T17:39:41.397078490Z" level=info msg="StartContainer for \"dbee84c9521c77b5791321ba1e7925a21ca9c7dcaa9e7206beaaf663379617b5\" returns successfully" Mar 17 17:39:41.604846 kubelet[2516]: E0317 17:39:41.604122 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:39:41.609519 kubelet[2516]: E0317 17:39:41.609241 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:39:41.611834 kubelet[2516]: E0317 17:39:41.611706 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:39:42.605470 kubelet[2516]: E0317 17:39:42.605427 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:39:42.611804 kubelet[2516]: E0317 17:39:42.606894 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:39:42.841202 kubelet[2516]: I0317 17:39:42.841138 2516 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 17 17:39:43.607944 kubelet[2516]: E0317 17:39:43.607876 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:39:44.089619 kubelet[2516]: E0317 17:39:44.089530 2516 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 17 17:39:44.180111 kubelet[2516]: I0317 17:39:44.179967 2516 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Mar 17 17:39:44.300963 kubelet[2516]: E0317 17:39:44.300107 2516 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:39:44.400583 kubelet[2516]: E0317 17:39:44.400414 2516 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:39:44.501101 kubelet[2516]: E0317 17:39:44.501004 2516 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:39:44.601700 kubelet[2516]: E0317 17:39:44.601469 2516 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:39:44.702129 kubelet[2516]: E0317 17:39:44.702022 2516 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:39:44.803886 kubelet[2516]: E0317 17:39:44.803798 2516 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:39:44.904680 kubelet[2516]: E0317 17:39:44.904614 2516 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:39:45.005685 kubelet[2516]: E0317 17:39:45.005435 2516 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:39:45.106625 kubelet[2516]: E0317 17:39:45.106545 2516 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:39:45.218650 kubelet[2516]: E0317 17:39:45.216628 2516 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:39:45.318031 kubelet[2516]: E0317 17:39:45.317777 2516 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:39:45.424674 kubelet[2516]: E0317 17:39:45.422203 2516 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:39:45.526393 kubelet[2516]: E0317 17:39:45.526325 2516 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:39:45.628193 kubelet[2516]: E0317 17:39:45.627850 2516 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:39:45.728938 kubelet[2516]: E0317 17:39:45.728856 2516 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:39:45.814010 kubelet[2516]: E0317 17:39:45.813851 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:39:45.829871 kubelet[2516]: E0317 17:39:45.829810 2516 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:39:45.930110 kubelet[2516]: E0317 17:39:45.930056 2516 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" 
not found" Mar 17 17:39:46.458641 kubelet[2516]: I0317 17:39:46.458546 2516 apiserver.go:52] "Watching apiserver" Mar 17 17:39:46.494730 kubelet[2516]: I0317 17:39:46.494642 2516 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 17:39:48.033233 kubelet[2516]: E0317 17:39:48.033165 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:39:48.076776 systemd[1]: Reloading requested from client PID 2800 ('systemctl') (unit session-7.scope)... Mar 17 17:39:48.076797 systemd[1]: Reloading... Mar 17 17:39:48.163267 zram_generator::config[2842]: No configuration found. Mar 17 17:39:48.354268 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:39:48.442912 systemd[1]: Reloading finished in 365 ms. Mar 17 17:39:48.493533 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:39:48.504092 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 17:39:48.504645 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:39:48.522633 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:39:48.691100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:39:48.696970 (kubelet)[2894]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 17:39:48.746576 kubelet[2894]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:39:48.746576 kubelet[2894]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 17:39:48.746576 kubelet[2894]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:39:48.747086 kubelet[2894]: I0317 17:39:48.746621 2894 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 17:39:48.751595 kubelet[2894]: I0317 17:39:48.751551 2894 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 17 17:39:48.751595 kubelet[2894]: I0317 17:39:48.751578 2894 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 17:39:48.751864 kubelet[2894]: I0317 17:39:48.751838 2894 server.go:927] "Client rotation is on, will bootstrap in background" Mar 17 17:39:48.753238 kubelet[2894]: I0317 17:39:48.753188 2894 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 17 17:39:48.754439 kubelet[2894]: I0317 17:39:48.754256 2894 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 17:39:48.764914 kubelet[2894]: I0317 17:39:48.764858 2894 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 17:39:48.765509 kubelet[2894]: I0317 17:39:48.765462 2894 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 17:39:48.765704 kubelet[2894]: I0317 17:39:48.765497 2894 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 17 17:39:48.765844 kubelet[2894]: I0317 17:39:48.765716 2894 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 17:39:48.765844 kubelet[2894]: I0317 17:39:48.765726 2894 container_manager_linux.go:301] "Creating device plugin manager" Mar 17 17:39:48.765844 kubelet[2894]: I0317 17:39:48.765770 2894 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:39:48.765948 kubelet[2894]: I0317 17:39:48.765871 2894 kubelet.go:400] "Attempting to sync node with API server" Mar 17 17:39:48.765948 kubelet[2894]: I0317 17:39:48.765882 2894 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 17:39:48.765948 kubelet[2894]: I0317 17:39:48.765907 2894 kubelet.go:312] "Adding apiserver pod source" Mar 17 17:39:48.765948 kubelet[2894]: I0317 17:39:48.765925 2894 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 17:39:48.767941 kubelet[2894]: I0317 17:39:48.767654 2894 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 17 17:39:48.767941 kubelet[2894]: I0317 17:39:48.767828 2894 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 17:39:48.769958 kubelet[2894]: I0317 17:39:48.769929 2894 server.go:1264] "Started kubelet" Mar 17 17:39:48.773805 kubelet[2894]: I0317 17:39:48.773136 2894 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 17:39:48.775848 kubelet[2894]: E0317 17:39:48.775821 2894 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 17:39:48.775965 kubelet[2894]: I0317 17:39:48.775835 2894 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 17 17:39:48.776081 kubelet[2894]: I0317 17:39:48.776060 2894 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 17:39:48.776331 kubelet[2894]: I0317 17:39:48.776316 2894 reconciler.go:26] "Reconciler: start to sync state" Mar 17 17:39:48.776733 kubelet[2894]: I0317 17:39:48.776693 2894 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 17:39:48.779779 kubelet[2894]: I0317 17:39:48.779753 2894 server.go:455] "Adding debug handlers to kubelet server" Mar 17 17:39:48.782277 kubelet[2894]: I0317 17:39:48.780377 2894 factory.go:221] Registration of the systemd container factory successfully Mar 17 17:39:48.782743 kubelet[2894]: I0317 17:39:48.782718 2894 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 17:39:48.783369 kubelet[2894]: I0317 17:39:48.780542 2894 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 17:39:48.783691 kubelet[2894]: I0317 17:39:48.783667 2894 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 17:39:48.784632 kubelet[2894]: I0317 17:39:48.784603 2894 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 17:39:48.785657 kubelet[2894]: I0317 17:39:48.785614 2894 factory.go:221] Registration of the containerd container factory successfully Mar 17 17:39:48.786603 kubelet[2894]: I0317 17:39:48.786371 2894 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 17 17:39:48.786603 kubelet[2894]: I0317 17:39:48.786407 2894 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 17:39:48.786603 kubelet[2894]: I0317 17:39:48.786431 2894 kubelet.go:2337] "Starting kubelet main sync loop" Mar 17 17:39:48.786603 kubelet[2894]: E0317 17:39:48.786493 2894 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 17:39:48.852359 kubelet[2894]: I0317 17:39:48.852327 2894 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 17:39:48.852359 kubelet[2894]: I0317 17:39:48.852348 2894 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 17:39:48.852359 kubelet[2894]: I0317 17:39:48.852366 2894 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:39:48.852589 kubelet[2894]: I0317 17:39:48.852524 2894 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 17 17:39:48.852589 kubelet[2894]: I0317 17:39:48.852534 2894 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 17 17:39:48.852589 kubelet[2894]: I0317 17:39:48.852551 2894 policy_none.go:49] "None policy: Start" Mar 17 17:39:48.855316 kubelet[2894]: I0317 17:39:48.853193 2894 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 17:39:48.855316 kubelet[2894]: I0317 17:39:48.853235 2894 state_mem.go:35] "Initializing new in-memory state store" Mar 17 17:39:48.855316 kubelet[2894]: I0317 17:39:48.853429 2894 state_mem.go:75] "Updated machine memory state" Mar 17 17:39:48.855533 kubelet[2894]: I0317 17:39:48.855502 2894 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 17:39:48.855748 kubelet[2894]: I0317 17:39:48.855701 2894 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 17:39:48.855832 kubelet[2894]: I0317 17:39:48.855812 2894 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 17:39:48.880720 kubelet[2894]: I0317 17:39:48.880675 2894 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 17 17:39:48.885995 kubelet[2894]: I0317 17:39:48.885969 2894 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Mar 17 17:39:48.886096 kubelet[2894]: I0317 17:39:48.886047 2894 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Mar 17 17:39:48.886757 kubelet[2894]: I0317 17:39:48.886725 2894 topology_manager.go:215] "Topology Admit Handler" podUID="cfef04c846d675d91569400ca405e82a" podNamespace="kube-system" podName="kube-apiserver-localhost" Mar 17 17:39:48.886927 kubelet[2894]: I0317 17:39:48.886867 2894 topology_manager.go:215] "Topology Admit Handler" podUID="23a18e2dc14f395c5f1bea711a5a9344" podNamespace="kube-system" podName="kube-controller-manager-localhost" Mar 17 17:39:48.886964 kubelet[2894]: I0317 17:39:48.886940 2894 topology_manager.go:215] "Topology Admit Handler" podUID="d79ab404294384d4bcc36fb5b5509bbb" podNamespace="kube-system" podName="kube-scheduler-localhost" Mar 17 17:39:48.902653 kubelet[2894]: E0317 17:39:48.902605 2894 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 17 17:39:48.977764 kubelet[2894]: I0317 17:39:48.977627 2894 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/cfef04c846d675d91569400ca405e82a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"cfef04c846d675d91569400ca405e82a\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:39:48.977764 kubelet[2894]: I0317 17:39:48.977665 2894 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:39:48.977764 kubelet[2894]: I0317 17:39:48.977683 2894 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:39:48.977764 kubelet[2894]: I0317 17:39:48.977700 2894 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:39:48.977764 kubelet[2894]: I0317 17:39:48.977718 2894 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:39:48.977986 kubelet[2894]: I0317 17:39:48.977732 2894 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cfef04c846d675d91569400ca405e82a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"cfef04c846d675d91569400ca405e82a\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:39:48.977986 kubelet[2894]: I0317 17:39:48.977746 2894 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cfef04c846d675d91569400ca405e82a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"cfef04c846d675d91569400ca405e82a\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:39:48.977986 kubelet[2894]: I0317 17:39:48.977764 2894 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:39:48.977986 kubelet[2894]: I0317 17:39:48.977799 2894 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d79ab404294384d4bcc36fb5b5509bbb-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d79ab404294384d4bcc36fb5b5509bbb\") " pod="kube-system/kube-scheduler-localhost" Mar 17 17:39:49.192971 kubelet[2894]: E0317 17:39:49.192924 2894 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:39:49.196477 kubelet[2894]: E0317 17:39:49.196450 2894 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:39:49.203946 kubelet[2894]: E0317 17:39:49.203917 2894 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:39:49.768249 kubelet[2894]: I0317 17:39:49.767084 2894 apiserver.go:52] "Watching apiserver" Mar 17 17:39:49.776979 kubelet[2894]: I0317 17:39:49.776920 2894 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 17:39:49.796499 kubelet[2894]: E0317 17:39:49.796448 2894 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:39:49.797845 kubelet[2894]: E0317 17:39:49.797801 2894 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:39:49.854155 kubelet[2894]: E0317 17:39:49.854101 2894 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 17 17:39:49.854602 kubelet[2894]: E0317 17:39:49.854583 2894 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:39:49.877744 kubelet[2894]: I0317 17:39:49.877373 2894 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.877351794 podStartE2EDuration="1.877351794s" podCreationTimestamp="2025-03-17 17:39:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:39:49.876591343 +0000 UTC m=+1.171255212" watchObservedRunningTime="2025-03-17 17:39:49.877351794 +0000 UTC m=+1.172015653" Mar 17 17:39:50.209922 kubelet[2894]: I0317 17:39:50.209854 2894 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.209814481 podStartE2EDuration="2.209814481s" podCreationTimestamp="2025-03-17 17:39:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:39:49.915458915 +0000 UTC m=+1.210122784" watchObservedRunningTime="2025-03-17 17:39:50.209814481 +0000 UTC m=+1.504478340" Mar 17 17:39:50.804254 kubelet[2894]: E0317 17:39:50.802854 2894 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:39:53.737139 sudo[1802]: pam_unix(sudo:session): session closed for user root Mar 17 17:39:53.738799 sshd[1801]: Connection closed by 10.0.0.1 port 38946 Mar 17 17:39:53.739660 sshd-session[1796]: pam_unix(sshd:session): session closed for user core Mar 17 17:39:53.745599 systemd[1]: sshd@6-10.0.0.27:22-10.0.0.1:38946.service: Deactivated successfully. 
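The repeated "connection refused" errors, the "Failed to ensure lease exists, will retry" messages, and the "Attempting to register node" / "Unable to register node with API server" cycle earlier in this log are all the kubelet retrying the same two calls against https://10.0.0.27:6443: a GET on its Lease in the kube-node-lease namespace and a POST of its Node object. The Go sketch below reproduces those two reads with client-go so the same condition can be checked by hand; the kubeconfig path /etc/kubernetes/admin.conf is an assumption and is not taken from this log.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: an admin kubeconfig exists at this path; the log does not name one.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	// Same object the lease controller keeps retrying:
	// GET /apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost
	if lease, err := cs.CoordinationV1().Leases("kube-node-lease").Get(ctx, "localhost", metav1.GetOptions{}); err != nil {
		fmt.Println("lease not available yet:", err)
	} else {
		fmt.Println("lease last renewed:", lease.Spec.RenewTime)
	}

	// Same object kubelet_node_status keeps trying to create and then look up:
	// /api/v1/nodes/localhost
	if node, err := cs.CoreV1().Nodes().Get(ctx, "localhost", metav1.GetOptions{}); err != nil {
		fmt.Println("node not registered yet:", err)
	} else {
		fmt.Println("node registered:", node.Name)
	}
}

Until the kube-apiserver container started at 17:39:41 and began serving, both calls fail exactly as the reflector and lease-controller entries above show; by 17:39:44 the log records "Successfully registered node", after which both reads succeed.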
Mar 17 17:39:53.748477 systemd[1]: session-7.scope: Deactivated successfully. Mar 17 17:39:53.749188 systemd-logind[1578]: Session 7 logged out. Waiting for processes to exit. Mar 17 17:39:53.750531 systemd-logind[1578]: Removed session 7. Mar 17 17:39:56.159476 kubelet[2894]: E0317 17:39:56.159439 2894 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:39:56.279647 kubelet[2894]: I0317 17:39:56.279579 2894 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=9.27956194 podStartE2EDuration="9.27956194s" podCreationTimestamp="2025-03-17 17:39:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:39:50.210011551 +0000 UTC m=+1.504675410" watchObservedRunningTime="2025-03-17 17:39:56.27956194 +0000 UTC m=+7.574225800" Mar 17 17:39:56.442616 kubelet[2894]: E0317 17:39:56.442524 2894 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:39:56.815352 kubelet[2894]: E0317 17:39:56.815010 2894 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:39:59.130152 kubelet[2894]: E0317 17:39:59.130094 2894 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:39:59.820107 kubelet[2894]: E0317 17:39:59.820047 2894 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:00.952248 kubelet[2894]: I0317 17:40:00.952120 2894 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 17 17:40:00.957388 containerd[1595]: time="2025-03-17T17:40:00.954145032Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Mar 17 17:40:00.958237 kubelet[2894]: I0317 17:40:00.954702 2894 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 17 17:40:00.966515 kubelet[2894]: I0317 17:40:00.966351 2894 topology_manager.go:215] "Topology Admit Handler" podUID="4adcae25-7455-44ff-b677-f9d47dd18d96" podNamespace="kube-system" podName="kube-proxy-ptcrg" Mar 17 17:40:01.059093 kubelet[2894]: I0317 17:40:01.058927 2894 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4adcae25-7455-44ff-b677-f9d47dd18d96-kube-proxy\") pod \"kube-proxy-ptcrg\" (UID: \"4adcae25-7455-44ff-b677-f9d47dd18d96\") " pod="kube-system/kube-proxy-ptcrg" Mar 17 17:40:01.059093 kubelet[2894]: I0317 17:40:01.058987 2894 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4adcae25-7455-44ff-b677-f9d47dd18d96-xtables-lock\") pod \"kube-proxy-ptcrg\" (UID: \"4adcae25-7455-44ff-b677-f9d47dd18d96\") " pod="kube-system/kube-proxy-ptcrg" Mar 17 17:40:01.059093 kubelet[2894]: I0317 17:40:01.059012 2894 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4adcae25-7455-44ff-b677-f9d47dd18d96-lib-modules\") pod \"kube-proxy-ptcrg\" (UID: \"4adcae25-7455-44ff-b677-f9d47dd18d96\") " pod="kube-system/kube-proxy-ptcrg" Mar 17 17:40:01.059093 kubelet[2894]: I0317 17:40:01.059035 2894 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltk2q\" (UniqueName: \"kubernetes.io/projected/4adcae25-7455-44ff-b677-f9d47dd18d96-kube-api-access-ltk2q\") pod \"kube-proxy-ptcrg\" (UID: \"4adcae25-7455-44ff-b677-f9d47dd18d96\") " pod="kube-system/kube-proxy-ptcrg" Mar 17 17:40:01.165366 kubelet[2894]: E0317 17:40:01.165310 2894 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Mar 17 17:40:01.165366 kubelet[2894]: E0317 17:40:01.165349 2894 projected.go:200] Error preparing data for projected volume kube-api-access-ltk2q for pod kube-system/kube-proxy-ptcrg: configmap "kube-root-ca.crt" not found Mar 17 17:40:01.165580 kubelet[2894]: E0317 17:40:01.165420 2894 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4adcae25-7455-44ff-b677-f9d47dd18d96-kube-api-access-ltk2q podName:4adcae25-7455-44ff-b677-f9d47dd18d96 nodeName:}" failed. No retries permitted until 2025-03-17 17:40:01.665397465 +0000 UTC m=+12.960061324 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-ltk2q" (UniqueName: "kubernetes.io/projected/4adcae25-7455-44ff-b677-f9d47dd18d96-kube-api-access-ltk2q") pod "kube-proxy-ptcrg" (UID: "4adcae25-7455-44ff-b677-f9d47dd18d96") : configmap "kube-root-ca.crt" not found Mar 17 17:40:01.768687 kubelet[2894]: I0317 17:40:01.768529 2894 topology_manager.go:215] "Topology Admit Handler" podUID="5e5c4860-3d1c-4578-98c1-585dfb09f86e" podNamespace="tigera-operator" podName="tigera-operator-6479d6dc54-957jr" Mar 17 17:40:01.864264 kubelet[2894]: I0317 17:40:01.864141 2894 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fcgf\" (UniqueName: \"kubernetes.io/projected/5e5c4860-3d1c-4578-98c1-585dfb09f86e-kube-api-access-8fcgf\") pod \"tigera-operator-6479d6dc54-957jr\" (UID: \"5e5c4860-3d1c-4578-98c1-585dfb09f86e\") " pod="tigera-operator/tigera-operator-6479d6dc54-957jr" Mar 17 17:40:01.864264 kubelet[2894]: I0317 17:40:01.864203 2894 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5e5c4860-3d1c-4578-98c1-585dfb09f86e-var-lib-calico\") pod \"tigera-operator-6479d6dc54-957jr\" (UID: \"5e5c4860-3d1c-4578-98c1-585dfb09f86e\") " pod="tigera-operator/tigera-operator-6479d6dc54-957jr" Mar 17 17:40:01.881858 kubelet[2894]: E0317 17:40:01.881813 2894 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:01.882298 containerd[1595]: time="2025-03-17T17:40:01.882266052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ptcrg,Uid:4adcae25-7455-44ff-b677-f9d47dd18d96,Namespace:kube-system,Attempt:0,}" Mar 17 17:40:02.075579 containerd[1595]: time="2025-03-17T17:40:02.075449169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6479d6dc54-957jr,Uid:5e5c4860-3d1c-4578-98c1-585dfb09f86e,Namespace:tigera-operator,Attempt:0,}" Mar 17 17:40:02.196096 containerd[1595]: time="2025-03-17T17:40:02.195984673Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:40:02.197041 containerd[1595]: time="2025-03-17T17:40:02.196862371Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:40:02.197041 containerd[1595]: time="2025-03-17T17:40:02.196912344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:40:02.197156 containerd[1595]: time="2025-03-17T17:40:02.197029144Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:40:02.204295 containerd[1595]: time="2025-03-17T17:40:02.204199618Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:40:02.204480 containerd[1595]: time="2025-03-17T17:40:02.204444077Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:40:02.204573 containerd[1595]: time="2025-03-17T17:40:02.204549244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:40:02.204767 containerd[1595]: time="2025-03-17T17:40:02.204731036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:40:02.242322 containerd[1595]: time="2025-03-17T17:40:02.242267049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ptcrg,Uid:4adcae25-7455-44ff-b677-f9d47dd18d96,Namespace:kube-system,Attempt:0,} returns sandbox id \"91e288ce45d7984243beee24fa4e9ede76c2ca1fe9cef2a230366294b86f8559\"" Mar 17 17:40:02.243277 kubelet[2894]: E0317 17:40:02.243211 2894 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:02.247600 containerd[1595]: time="2025-03-17T17:40:02.247545210Z" level=info msg="CreateContainer within sandbox \"91e288ce45d7984243beee24fa4e9ede76c2ca1fe9cef2a230366294b86f8559\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 17 17:40:02.260064 containerd[1595]: time="2025-03-17T17:40:02.259953699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6479d6dc54-957jr,Uid:5e5c4860-3d1c-4578-98c1-585dfb09f86e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"620de466f28b5a18313d0532257019b927a6d515e058d618564222751f21dd43\"" Mar 17 17:40:02.261801 containerd[1595]: time="2025-03-17T17:40:02.261587877Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.5\"" Mar 17 17:40:02.269015 containerd[1595]: time="2025-03-17T17:40:02.268965672Z" level=info msg="CreateContainer within sandbox \"91e288ce45d7984243beee24fa4e9ede76c2ca1fe9cef2a230366294b86f8559\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"63bc529b9a082cab37a8d157b2d4c1833b8c62c6b9e5db07d6e32b651e3d500a\"" Mar 17 17:40:02.269570 containerd[1595]: time="2025-03-17T17:40:02.269536874Z" level=info msg="StartContainer for \"63bc529b9a082cab37a8d157b2d4c1833b8c62c6b9e5db07d6e32b651e3d500a\"" Mar 17 17:40:02.335558 containerd[1595]: time="2025-03-17T17:40:02.335427359Z" level=info msg="StartContainer for \"63bc529b9a082cab37a8d157b2d4c1833b8c62c6b9e5db07d6e32b651e3d500a\" returns successfully" Mar 17 17:40:02.827035 kubelet[2894]: E0317 17:40:02.826994 2894 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:02.836589 kubelet[2894]: I0317 17:40:02.836374 2894 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ptcrg" podStartSLOduration=2.836338004 podStartE2EDuration="2.836338004s" podCreationTimestamp="2025-03-17 17:40:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:40:02.836118132 +0000 UTC m=+14.130782001" watchObservedRunningTime="2025-03-17 17:40:02.836338004 +0000 UTC m=+14.131001863" Mar 17 17:40:04.599847 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1106856200.mount: Deactivated successfully. 
Mar 17 17:40:05.052856 containerd[1595]: time="2025-03-17T17:40:05.052769200Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:40:05.053820 containerd[1595]: time="2025-03-17T17:40:05.053768907Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.5: active requests=0, bytes read=21945008" Mar 17 17:40:05.055303 containerd[1595]: time="2025-03-17T17:40:05.055272679Z" level=info msg="ImageCreate event name:\"sha256:dc4a8a56c133edb1bc4c3d6bc94bcd96f2bde82413370cb1783ac2d7f3a46d53\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:40:05.058069 containerd[1595]: time="2025-03-17T17:40:05.058014507Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:3341fa9475c0325b86228c8726389f9bae9fd6c430c66fe5cd5dc39d7bb6ad4b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:40:05.058796 containerd[1595]: time="2025-03-17T17:40:05.058756580Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.5\" with image id \"sha256:dc4a8a56c133edb1bc4c3d6bc94bcd96f2bde82413370cb1783ac2d7f3a46d53\", repo tag \"quay.io/tigera/operator:v1.36.5\", repo digest \"quay.io/tigera/operator@sha256:3341fa9475c0325b86228c8726389f9bae9fd6c430c66fe5cd5dc39d7bb6ad4b\", size \"21941003\" in 2.79714071s" Mar 17 17:40:05.058796 containerd[1595]: time="2025-03-17T17:40:05.058788079Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.5\" returns image reference \"sha256:dc4a8a56c133edb1bc4c3d6bc94bcd96f2bde82413370cb1783ac2d7f3a46d53\"" Mar 17 17:40:05.060856 containerd[1595]: time="2025-03-17T17:40:05.060827999Z" level=info msg="CreateContainer within sandbox \"620de466f28b5a18313d0532257019b927a6d515e058d618564222751f21dd43\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 17 17:40:05.073422 containerd[1595]: time="2025-03-17T17:40:05.073375653Z" level=info msg="CreateContainer within sandbox \"620de466f28b5a18313d0532257019b927a6d515e058d618564222751f21dd43\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"7e3b0937cfa6eaca5d5ea7d9bac9f79ba32c5bd73f771eadbdab49b014971411\"" Mar 17 17:40:05.074000 containerd[1595]: time="2025-03-17T17:40:05.073927880Z" level=info msg="StartContainer for \"7e3b0937cfa6eaca5d5ea7d9bac9f79ba32c5bd73f771eadbdab49b014971411\"" Mar 17 17:40:05.204141 containerd[1595]: time="2025-03-17T17:40:05.204074558Z" level=info msg="StartContainer for \"7e3b0937cfa6eaca5d5ea7d9bac9f79ba32c5bd73f771eadbdab49b014971411\" returns successfully" Mar 17 17:40:08.269213 kubelet[2894]: I0317 17:40:08.269123 2894 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6479d6dc54-957jr" podStartSLOduration=4.470595209 podStartE2EDuration="7.269087927s" podCreationTimestamp="2025-03-17 17:40:01 +0000 UTC" firstStartedPulling="2025-03-17 17:40:02.261110341 +0000 UTC m=+13.555774200" lastFinishedPulling="2025-03-17 17:40:05.059603059 +0000 UTC m=+16.354266918" observedRunningTime="2025-03-17 17:40:05.844247903 +0000 UTC m=+17.138911783" watchObservedRunningTime="2025-03-17 17:40:08.269087927 +0000 UTC m=+19.563751796" Mar 17 17:40:08.272281 kubelet[2894]: I0317 17:40:08.270540 2894 topology_manager.go:215] "Topology Admit Handler" podUID="6dbc6ded-7098-40ea-94f7-9055dfdb5d73" podNamespace="calico-system" podName="calico-typha-f49565b7-xpvhw" Mar 17 17:40:08.313622 kubelet[2894]: I0317 17:40:08.313534 2894 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkfnl\" (UniqueName: \"kubernetes.io/projected/6dbc6ded-7098-40ea-94f7-9055dfdb5d73-kube-api-access-nkfnl\") pod \"calico-typha-f49565b7-xpvhw\" (UID: \"6dbc6ded-7098-40ea-94f7-9055dfdb5d73\") " pod="calico-system/calico-typha-f49565b7-xpvhw" Mar 17 17:40:08.313622 kubelet[2894]: I0317 17:40:08.313643 2894 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6dbc6ded-7098-40ea-94f7-9055dfdb5d73-tigera-ca-bundle\") pod \"calico-typha-f49565b7-xpvhw\" (UID: \"6dbc6ded-7098-40ea-94f7-9055dfdb5d73\") " pod="calico-system/calico-typha-f49565b7-xpvhw" Mar 17 17:40:08.313891 kubelet[2894]: I0317 17:40:08.313669 2894 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/6dbc6ded-7098-40ea-94f7-9055dfdb5d73-typha-certs\") pod \"calico-typha-f49565b7-xpvhw\" (UID: \"6dbc6ded-7098-40ea-94f7-9055dfdb5d73\") " pod="calico-system/calico-typha-f49565b7-xpvhw" Mar 17 17:40:08.427541 kubelet[2894]: I0317 17:40:08.427459 2894 topology_manager.go:215] "Topology Admit Handler" podUID="abbf8477-fea4-402a-83b4-a95440f9926e" podNamespace="calico-system" podName="calico-node-xzl4f" Mar 17 17:40:08.552007 kubelet[2894]: I0317 17:40:08.551757 2894 topology_manager.go:215] "Topology Admit Handler" podUID="e6243402-8f9c-4b35-b2c7-317fe823ae81" podNamespace="calico-system" podName="csi-node-driver-24zxx" Mar 17 17:40:08.552160 kubelet[2894]: E0317 17:40:08.552122 2894 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-24zxx" podUID="e6243402-8f9c-4b35-b2c7-317fe823ae81" Mar 17 17:40:08.582518 kubelet[2894]: E0317 17:40:08.582467 2894 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:08.583847 containerd[1595]: time="2025-03-17T17:40:08.583372353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f49565b7-xpvhw,Uid:6dbc6ded-7098-40ea-94f7-9055dfdb5d73,Namespace:calico-system,Attempt:0,}" Mar 17 17:40:08.618153 kubelet[2894]: I0317 17:40:08.617925 2894 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/abbf8477-fea4-402a-83b4-a95440f9926e-tigera-ca-bundle\") pod \"calico-node-xzl4f\" (UID: \"abbf8477-fea4-402a-83b4-a95440f9926e\") " pod="calico-system/calico-node-xzl4f" Mar 17 17:40:08.618153 kubelet[2894]: I0317 17:40:08.618017 2894 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf252\" (UniqueName: \"kubernetes.io/projected/abbf8477-fea4-402a-83b4-a95440f9926e-kube-api-access-kf252\") pod \"calico-node-xzl4f\" (UID: \"abbf8477-fea4-402a-83b4-a95440f9926e\") " pod="calico-system/calico-node-xzl4f" Mar 17 17:40:08.618153 kubelet[2894]: I0317 17:40:08.618048 2894 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/abbf8477-fea4-402a-83b4-a95440f9926e-node-certs\") pod \"calico-node-xzl4f\" (UID: 
\"abbf8477-fea4-402a-83b4-a95440f9926e\") " pod="calico-system/calico-node-xzl4f" Mar 17 17:40:08.618153 kubelet[2894]: I0317 17:40:08.618070 2894 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/abbf8477-fea4-402a-83b4-a95440f9926e-cni-log-dir\") pod \"calico-node-xzl4f\" (UID: \"abbf8477-fea4-402a-83b4-a95440f9926e\") " pod="calico-system/calico-node-xzl4f" Mar 17 17:40:08.618153 kubelet[2894]: I0317 17:40:08.618094 2894 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/abbf8477-fea4-402a-83b4-a95440f9926e-lib-modules\") pod \"calico-node-xzl4f\" (UID: \"abbf8477-fea4-402a-83b4-a95440f9926e\") " pod="calico-system/calico-node-xzl4f" Mar 17 17:40:08.618530 kubelet[2894]: I0317 17:40:08.618114 2894 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/abbf8477-fea4-402a-83b4-a95440f9926e-flexvol-driver-host\") pod \"calico-node-xzl4f\" (UID: \"abbf8477-fea4-402a-83b4-a95440f9926e\") " pod="calico-system/calico-node-xzl4f" Mar 17 17:40:08.618530 kubelet[2894]: I0317 17:40:08.618133 2894 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/abbf8477-fea4-402a-83b4-a95440f9926e-var-run-calico\") pod \"calico-node-xzl4f\" (UID: \"abbf8477-fea4-402a-83b4-a95440f9926e\") " pod="calico-system/calico-node-xzl4f" Mar 17 17:40:08.618530 kubelet[2894]: I0317 17:40:08.618153 2894 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/abbf8477-fea4-402a-83b4-a95440f9926e-xtables-lock\") pod \"calico-node-xzl4f\" (UID: \"abbf8477-fea4-402a-83b4-a95440f9926e\") " pod="calico-system/calico-node-xzl4f" Mar 17 17:40:08.618530 kubelet[2894]: I0317 17:40:08.618173 2894 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/abbf8477-fea4-402a-83b4-a95440f9926e-policysync\") pod \"calico-node-xzl4f\" (UID: \"abbf8477-fea4-402a-83b4-a95440f9926e\") " pod="calico-system/calico-node-xzl4f" Mar 17 17:40:08.618530 kubelet[2894]: I0317 17:40:08.618193 2894 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/abbf8477-fea4-402a-83b4-a95440f9926e-cni-bin-dir\") pod \"calico-node-xzl4f\" (UID: \"abbf8477-fea4-402a-83b4-a95440f9926e\") " pod="calico-system/calico-node-xzl4f" Mar 17 17:40:08.618698 kubelet[2894]: I0317 17:40:08.618216 2894 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/abbf8477-fea4-402a-83b4-a95440f9926e-cni-net-dir\") pod \"calico-node-xzl4f\" (UID: \"abbf8477-fea4-402a-83b4-a95440f9926e\") " pod="calico-system/calico-node-xzl4f" Mar 17 17:40:08.618698 kubelet[2894]: I0317 17:40:08.618260 2894 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/abbf8477-fea4-402a-83b4-a95440f9926e-var-lib-calico\") pod \"calico-node-xzl4f\" (UID: \"abbf8477-fea4-402a-83b4-a95440f9926e\") " pod="calico-system/calico-node-xzl4f" Mar 17 
17:40:08.721304 kubelet[2894]: I0317 17:40:08.720759 2894 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e6243402-8f9c-4b35-b2c7-317fe823ae81-kubelet-dir\") pod \"csi-node-driver-24zxx\" (UID: \"e6243402-8f9c-4b35-b2c7-317fe823ae81\") " pod="calico-system/csi-node-driver-24zxx" Mar 17 17:40:08.721304 kubelet[2894]: I0317 17:40:08.720838 2894 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e6243402-8f9c-4b35-b2c7-317fe823ae81-socket-dir\") pod \"csi-node-driver-24zxx\" (UID: \"e6243402-8f9c-4b35-b2c7-317fe823ae81\") " pod="calico-system/csi-node-driver-24zxx" Mar 17 17:40:08.721304 kubelet[2894]: I0317 17:40:08.720897 2894 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/e6243402-8f9c-4b35-b2c7-317fe823ae81-varrun\") pod \"csi-node-driver-24zxx\" (UID: \"e6243402-8f9c-4b35-b2c7-317fe823ae81\") " pod="calico-system/csi-node-driver-24zxx" Mar 17 17:40:08.721304 kubelet[2894]: I0317 17:40:08.720919 2894 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e6243402-8f9c-4b35-b2c7-317fe823ae81-registration-dir\") pod \"csi-node-driver-24zxx\" (UID: \"e6243402-8f9c-4b35-b2c7-317fe823ae81\") " pod="calico-system/csi-node-driver-24zxx" Mar 17 17:40:08.721304 kubelet[2894]: I0317 17:40:08.720943 2894 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xqhc\" (UniqueName: \"kubernetes.io/projected/e6243402-8f9c-4b35-b2c7-317fe823ae81-kube-api-access-9xqhc\") pod \"csi-node-driver-24zxx\" (UID: \"e6243402-8f9c-4b35-b2c7-317fe823ae81\") " pod="calico-system/csi-node-driver-24zxx" Mar 17 17:40:08.723104 containerd[1595]: time="2025-03-17T17:40:08.722125913Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:40:08.723104 containerd[1595]: time="2025-03-17T17:40:08.722216162Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:40:08.723104 containerd[1595]: time="2025-03-17T17:40:08.722258922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:40:08.723104 containerd[1595]: time="2025-03-17T17:40:08.722386322Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:40:08.732254 kubelet[2894]: E0317 17:40:08.731878 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:08.732254 kubelet[2894]: W0317 17:40:08.731932 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:08.732254 kubelet[2894]: E0317 17:40:08.731999 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:40:08.787209 kubelet[2894]: E0317 17:40:08.787095 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:08.787209 kubelet[2894]: W0317 17:40:08.787124 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:08.787209 kubelet[2894]: E0317 17:40:08.787152 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:08.815098 containerd[1595]: time="2025-03-17T17:40:08.814906829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f49565b7-xpvhw,Uid:6dbc6ded-7098-40ea-94f7-9055dfdb5d73,Namespace:calico-system,Attempt:0,} returns sandbox id \"eb383e3c8ffa0527a44ddeb67d5855a6f59269e926527fdb136010fb0609dcf1\"" Mar 17 17:40:08.820352 kubelet[2894]: E0317 17:40:08.816979 2894 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:08.821868 containerd[1595]: time="2025-03-17T17:40:08.821802451Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.2\"" Mar 17 17:40:08.822285 kubelet[2894]: E0317 17:40:08.822084 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:08.822285 kubelet[2894]: W0317 17:40:08.822112 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:08.822285 kubelet[2894]: E0317 17:40:08.822140 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:08.825109 kubelet[2894]: E0317 17:40:08.825075 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:08.825387 kubelet[2894]: W0317 17:40:08.825208 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:08.825387 kubelet[2894]: E0317 17:40:08.825284 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:08.825910 kubelet[2894]: E0317 17:40:08.825786 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:08.825910 kubelet[2894]: W0317 17:40:08.825823 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:08.825910 kubelet[2894]: E0317 17:40:08.825868 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:40:08.826848 kubelet[2894]: E0317 17:40:08.826830 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:08.827126 kubelet[2894]: W0317 17:40:08.826923 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:08.827126 kubelet[2894]: E0317 17:40:08.826965 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:08.827361 kubelet[2894]: E0317 17:40:08.827344 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:08.827430 kubelet[2894]: W0317 17:40:08.827417 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:08.827575 kubelet[2894]: E0317 17:40:08.827559 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:08.828903 kubelet[2894]: E0317 17:40:08.828864 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:08.828903 kubelet[2894]: W0317 17:40:08.828882 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:08.829146 kubelet[2894]: E0317 17:40:08.829120 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:08.829480 kubelet[2894]: E0317 17:40:08.829466 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:08.829480 kubelet[2894]: W0317 17:40:08.829477 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:08.829619 kubelet[2894]: E0317 17:40:08.829606 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:08.829821 kubelet[2894]: E0317 17:40:08.829796 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:08.829821 kubelet[2894]: W0317 17:40:08.829805 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:08.830152 kubelet[2894]: E0317 17:40:08.829858 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:40:08.830152 kubelet[2894]: E0317 17:40:08.830000 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:08.830152 kubelet[2894]: W0317 17:40:08.830007 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:08.830152 kubelet[2894]: E0317 17:40:08.830019 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:08.830350 kubelet[2894]: E0317 17:40:08.830279 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:08.830350 kubelet[2894]: W0317 17:40:08.830292 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:08.830350 kubelet[2894]: E0317 17:40:08.830313 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:08.830582 kubelet[2894]: E0317 17:40:08.830550 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:08.830582 kubelet[2894]: W0317 17:40:08.830566 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:08.830582 kubelet[2894]: E0317 17:40:08.830583 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:08.830817 kubelet[2894]: E0317 17:40:08.830798 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:08.830817 kubelet[2894]: W0317 17:40:08.830813 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:08.830901 kubelet[2894]: E0317 17:40:08.830836 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:08.831052 kubelet[2894]: E0317 17:40:08.831032 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:08.831052 kubelet[2894]: W0317 17:40:08.831046 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:08.831133 kubelet[2894]: E0317 17:40:08.831070 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:40:08.831290 kubelet[2894]: E0317 17:40:08.831272 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:08.831290 kubelet[2894]: W0317 17:40:08.831286 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:08.831394 kubelet[2894]: E0317 17:40:08.831308 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:08.831531 kubelet[2894]: E0317 17:40:08.831511 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:08.831531 kubelet[2894]: W0317 17:40:08.831524 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:08.831604 kubelet[2894]: E0317 17:40:08.831545 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:08.831785 kubelet[2894]: E0317 17:40:08.831737 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:08.831785 kubelet[2894]: W0317 17:40:08.831766 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:08.831785 kubelet[2894]: E0317 17:40:08.831784 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:08.832079 kubelet[2894]: E0317 17:40:08.832056 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:08.832079 kubelet[2894]: W0317 17:40:08.832073 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:08.832127 kubelet[2894]: E0317 17:40:08.832088 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:08.832377 kubelet[2894]: E0317 17:40:08.832361 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:08.832377 kubelet[2894]: W0317 17:40:08.832375 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:08.832448 kubelet[2894]: E0317 17:40:08.832387 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:40:08.832635 kubelet[2894]: E0317 17:40:08.832615 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:08.832635 kubelet[2894]: W0317 17:40:08.832626 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:08.832682 kubelet[2894]: E0317 17:40:08.832637 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:08.833009 kubelet[2894]: E0317 17:40:08.832964 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:08.833009 kubelet[2894]: W0317 17:40:08.832999 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:08.833131 kubelet[2894]: E0317 17:40:08.833036 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:08.833396 kubelet[2894]: E0317 17:40:08.833370 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:08.833396 kubelet[2894]: W0317 17:40:08.833382 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:08.833396 kubelet[2894]: E0317 17:40:08.833396 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:08.833770 kubelet[2894]: E0317 17:40:08.833746 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:08.833770 kubelet[2894]: W0317 17:40:08.833764 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:08.833868 kubelet[2894]: E0317 17:40:08.833785 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:08.834035 kubelet[2894]: E0317 17:40:08.834019 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:08.834073 kubelet[2894]: W0317 17:40:08.834034 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:08.834073 kubelet[2894]: E0317 17:40:08.834046 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:40:08.834275 kubelet[2894]: E0317 17:40:08.834254 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:08.834275 kubelet[2894]: W0317 17:40:08.834269 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:08.834334 kubelet[2894]: E0317 17:40:08.834280 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:08.835639 kubelet[2894]: E0317 17:40:08.834517 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:08.835639 kubelet[2894]: W0317 17:40:08.834533 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:08.835639 kubelet[2894]: E0317 17:40:08.834544 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:08.932438 kubelet[2894]: E0317 17:40:08.932380 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:08.932438 kubelet[2894]: W0317 17:40:08.932416 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:08.932438 kubelet[2894]: E0317 17:40:08.932446 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:08.987953 kubelet[2894]: E0317 17:40:08.987904 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:08.987953 kubelet[2894]: W0317 17:40:08.987934 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:08.988213 kubelet[2894]: E0317 17:40:08.987976 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:09.044396 kubelet[2894]: E0317 17:40:09.043543 2894 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:09.045258 containerd[1595]: time="2025-03-17T17:40:09.045188960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xzl4f,Uid:abbf8477-fea4-402a-83b4-a95440f9926e,Namespace:calico-system,Attempt:0,}" Mar 17 17:40:09.134467 containerd[1595]: time="2025-03-17T17:40:09.132416073Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:40:09.134467 containerd[1595]: time="2025-03-17T17:40:09.132494580Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:40:09.134467 containerd[1595]: time="2025-03-17T17:40:09.132510591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:40:09.134467 containerd[1595]: time="2025-03-17T17:40:09.132622050Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:40:09.189546 containerd[1595]: time="2025-03-17T17:40:09.189448578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xzl4f,Uid:abbf8477-fea4-402a-83b4-a95440f9926e,Namespace:calico-system,Attempt:0,} returns sandbox id \"b2b790914055531bd27801599a74fcacd9daaf810409971fc4a1dcf80c9de97c\"" Mar 17 17:40:09.191211 kubelet[2894]: E0317 17:40:09.190831 2894 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:10.787742 kubelet[2894]: E0317 17:40:10.787688 2894 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-24zxx" podUID="e6243402-8f9c-4b35-b2c7-317fe823ae81" Mar 17 17:40:12.791841 kubelet[2894]: E0317 17:40:12.791113 2894 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-24zxx" podUID="e6243402-8f9c-4b35-b2c7-317fe823ae81" Mar 17 17:40:13.955435 containerd[1595]: time="2025-03-17T17:40:13.955327612Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:40:13.962779 containerd[1595]: time="2025-03-17T17:40:13.962582324Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.2: active requests=0, bytes read=30414075" Mar 17 17:40:13.963492 containerd[1595]: time="2025-03-17T17:40:13.963404847Z" level=info msg="ImageCreate event name:\"sha256:1d6f9d005866d74e6f0a8b0b8b743d0eaf4efcb7c7032fd2215da9c6ca131cb5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:40:13.969958 containerd[1595]: time="2025-03-17T17:40:13.968651913Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:9839fd34b4c1bad50beed72aec59c64893487a46eea57dc2d7d66c3041d7bcce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:40:13.969958 containerd[1595]: time="2025-03-17T17:40:13.969775622Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.2\" with image id \"sha256:1d6f9d005866d74e6f0a8b0b8b743d0eaf4efcb7c7032fd2215da9c6ca131cb5\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:9839fd34b4c1bad50beed72aec59c64893487a46eea57dc2d7d66c3041d7bcce\", size \"31907171\" in 5.147923918s" Mar 17 17:40:13.969958 containerd[1595]: time="2025-03-17T17:40:13.969825655Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.2\" returns 
image reference \"sha256:1d6f9d005866d74e6f0a8b0b8b743d0eaf4efcb7c7032fd2215da9c6ca131cb5\"" Mar 17 17:40:13.979567 containerd[1595]: time="2025-03-17T17:40:13.979509205Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\"" Mar 17 17:40:14.001354 containerd[1595]: time="2025-03-17T17:40:14.001246501Z" level=info msg="CreateContainer within sandbox \"eb383e3c8ffa0527a44ddeb67d5855a6f59269e926527fdb136010fb0609dcf1\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 17 17:40:14.054265 containerd[1595]: time="2025-03-17T17:40:14.054092895Z" level=info msg="CreateContainer within sandbox \"eb383e3c8ffa0527a44ddeb67d5855a6f59269e926527fdb136010fb0609dcf1\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"61118bd6b7d6de3117c3fd424230057da23479e4478af09abbb5fe31c1a09607\"" Mar 17 17:40:14.055696 containerd[1595]: time="2025-03-17T17:40:14.055342940Z" level=info msg="StartContainer for \"61118bd6b7d6de3117c3fd424230057da23479e4478af09abbb5fe31c1a09607\"" Mar 17 17:40:14.212127 containerd[1595]: time="2025-03-17T17:40:14.211284126Z" level=info msg="StartContainer for \"61118bd6b7d6de3117c3fd424230057da23479e4478af09abbb5fe31c1a09607\" returns successfully" Mar 17 17:40:14.788934 kubelet[2894]: E0317 17:40:14.788867 2894 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-24zxx" podUID="e6243402-8f9c-4b35-b2c7-317fe823ae81" Mar 17 17:40:14.885490 kubelet[2894]: E0317 17:40:14.884141 2894 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:14.900334 kubelet[2894]: E0317 17:40:14.899884 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:14.900334 kubelet[2894]: W0317 17:40:14.899928 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:14.900334 kubelet[2894]: E0317 17:40:14.899962 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:14.900334 kubelet[2894]: E0317 17:40:14.900307 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:14.900334 kubelet[2894]: W0317 17:40:14.900316 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:14.900334 kubelet[2894]: E0317 17:40:14.900326 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:40:14.900988 kubelet[2894]: E0317 17:40:14.900890 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:14.900988 kubelet[2894]: W0317 17:40:14.900903 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:14.900988 kubelet[2894]: E0317 17:40:14.900914 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:14.901278 kubelet[2894]: E0317 17:40:14.901142 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:14.901278 kubelet[2894]: W0317 17:40:14.901153 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:14.901278 kubelet[2894]: E0317 17:40:14.901163 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:14.904573 kubelet[2894]: E0317 17:40:14.904531 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:14.904573 kubelet[2894]: W0317 17:40:14.904559 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:14.904573 kubelet[2894]: E0317 17:40:14.904586 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:14.904909 kubelet[2894]: E0317 17:40:14.904878 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:14.904909 kubelet[2894]: W0317 17:40:14.904893 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:14.904984 kubelet[2894]: E0317 17:40:14.904908 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:14.905214 kubelet[2894]: E0317 17:40:14.905198 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:14.905214 kubelet[2894]: W0317 17:40:14.905213 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:14.905312 kubelet[2894]: E0317 17:40:14.905260 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:40:14.905501 kubelet[2894]: E0317 17:40:14.905482 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:14.905501 kubelet[2894]: W0317 17:40:14.905497 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:14.905572 kubelet[2894]: E0317 17:40:14.905508 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:14.905788 kubelet[2894]: E0317 17:40:14.905764 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:14.905788 kubelet[2894]: W0317 17:40:14.905786 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:14.905837 kubelet[2894]: E0317 17:40:14.905797 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:14.906031 kubelet[2894]: E0317 17:40:14.906009 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:14.906031 kubelet[2894]: W0317 17:40:14.906022 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:14.906117 kubelet[2894]: E0317 17:40:14.906034 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:14.906988 kubelet[2894]: E0317 17:40:14.906938 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:14.906988 kubelet[2894]: W0317 17:40:14.906976 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:14.907078 kubelet[2894]: E0317 17:40:14.907011 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:14.907833 kubelet[2894]: E0317 17:40:14.907809 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:14.907833 kubelet[2894]: W0317 17:40:14.907827 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:14.907924 kubelet[2894]: E0317 17:40:14.907843 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:40:14.908346 kubelet[2894]: E0317 17:40:14.908210 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:14.908346 kubelet[2894]: W0317 17:40:14.908242 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:14.908346 kubelet[2894]: E0317 17:40:14.908259 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:14.909197 kubelet[2894]: E0317 17:40:14.908960 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:14.909197 kubelet[2894]: W0317 17:40:14.908994 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:14.909197 kubelet[2894]: E0317 17:40:14.909008 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:14.909493 kubelet[2894]: E0317 17:40:14.909372 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:14.909493 kubelet[2894]: W0317 17:40:14.909392 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:14.909493 kubelet[2894]: E0317 17:40:14.909421 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:14.936568 kubelet[2894]: I0317 17:40:14.936476 2894 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-f49565b7-xpvhw" podStartSLOduration=1.7797614309999998 podStartE2EDuration="6.936450316s" podCreationTimestamp="2025-03-17 17:40:08 +0000 UTC" firstStartedPulling="2025-03-17 17:40:08.81959036 +0000 UTC m=+20.114254219" lastFinishedPulling="2025-03-17 17:40:13.976279245 +0000 UTC m=+25.270943104" observedRunningTime="2025-03-17 17:40:14.919341415 +0000 UTC m=+26.214005284" watchObservedRunningTime="2025-03-17 17:40:14.936450316 +0000 UTC m=+26.231114175" Mar 17 17:40:15.007856 kubelet[2894]: E0317 17:40:15.007534 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:15.007856 kubelet[2894]: W0317 17:40:15.007566 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:15.007856 kubelet[2894]: E0317 17:40:15.007598 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:40:15.011745 kubelet[2894]: E0317 17:40:15.011668 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:15.011745 kubelet[2894]: W0317 17:40:15.011700 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:15.011745 kubelet[2894]: E0317 17:40:15.011726 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:15.017253 kubelet[2894]: E0317 17:40:15.013748 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:15.017253 kubelet[2894]: W0317 17:40:15.013785 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:15.017253 kubelet[2894]: E0317 17:40:15.013810 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:15.017253 kubelet[2894]: E0317 17:40:15.015322 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:15.017253 kubelet[2894]: W0317 17:40:15.015337 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:15.017253 kubelet[2894]: E0317 17:40:15.015424 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:15.017253 kubelet[2894]: E0317 17:40:15.016467 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:15.017253 kubelet[2894]: W0317 17:40:15.016481 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:15.017253 kubelet[2894]: E0317 17:40:15.016748 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:15.022306 kubelet[2894]: E0317 17:40:15.021417 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:15.022306 kubelet[2894]: W0317 17:40:15.021456 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:15.030305 kubelet[2894]: E0317 17:40:15.023365 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:40:15.030305 kubelet[2894]: E0317 17:40:15.024272 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:15.030305 kubelet[2894]: W0317 17:40:15.024288 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:15.030305 kubelet[2894]: E0317 17:40:15.025945 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:15.030305 kubelet[2894]: E0317 17:40:15.026269 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:15.030305 kubelet[2894]: W0317 17:40:15.026281 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:15.030305 kubelet[2894]: E0317 17:40:15.026427 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:15.030972 kubelet[2894]: E0317 17:40:15.030926 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:15.030972 kubelet[2894]: W0317 17:40:15.030955 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:15.031299 kubelet[2894]: E0317 17:40:15.031122 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:15.031434 kubelet[2894]: E0317 17:40:15.031402 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:15.031434 kubelet[2894]: W0317 17:40:15.031421 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:15.031590 kubelet[2894]: E0317 17:40:15.031538 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:15.032003 kubelet[2894]: E0317 17:40:15.031792 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:15.032003 kubelet[2894]: W0317 17:40:15.031810 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:15.032003 kubelet[2894]: E0317 17:40:15.031917 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:40:15.032420 kubelet[2894]: E0317 17:40:15.032191 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:15.032420 kubelet[2894]: W0317 17:40:15.032208 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:15.032420 kubelet[2894]: E0317 17:40:15.032244 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:15.032806 kubelet[2894]: E0317 17:40:15.032639 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:15.032806 kubelet[2894]: W0317 17:40:15.032655 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:15.032806 kubelet[2894]: E0317 17:40:15.032673 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:15.038769 kubelet[2894]: E0317 17:40:15.038701 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:15.038769 kubelet[2894]: W0317 17:40:15.038747 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:15.042906 kubelet[2894]: E0317 17:40:15.039403 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:15.042906 kubelet[2894]: E0317 17:40:15.039475 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:15.042906 kubelet[2894]: W0317 17:40:15.039491 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:15.042906 kubelet[2894]: E0317 17:40:15.039509 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:15.042906 kubelet[2894]: E0317 17:40:15.039952 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:15.042906 kubelet[2894]: W0317 17:40:15.039963 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:15.042906 kubelet[2894]: E0317 17:40:15.039982 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:40:15.042906 kubelet[2894]: E0317 17:40:15.040203 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:15.042906 kubelet[2894]: W0317 17:40:15.040213 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:15.042906 kubelet[2894]: E0317 17:40:15.040255 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:15.043330 kubelet[2894]: E0317 17:40:15.043165 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:15.044735 kubelet[2894]: W0317 17:40:15.043350 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:15.044735 kubelet[2894]: E0317 17:40:15.043544 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:15.890164 kubelet[2894]: E0317 17:40:15.889664 2894 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:15.931175 kubelet[2894]: E0317 17:40:15.931107 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:15.931175 kubelet[2894]: W0317 17:40:15.931142 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:15.931175 kubelet[2894]: E0317 17:40:15.931174 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:15.931497 kubelet[2894]: E0317 17:40:15.931460 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:15.931497 kubelet[2894]: W0317 17:40:15.931470 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:15.931497 kubelet[2894]: E0317 17:40:15.931481 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:40:15.958011 kubelet[2894]: E0317 17:40:15.957943 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:15.958011 kubelet[2894]: W0317 17:40:15.957990 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:15.958286 kubelet[2894]: E0317 17:40:15.958025 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:15.958532 kubelet[2894]: E0317 17:40:15.958499 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:15.958532 kubelet[2894]: W0317 17:40:15.958521 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:15.958627 kubelet[2894]: E0317 17:40:15.958537 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:15.959736 kubelet[2894]: E0317 17:40:15.958807 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:15.959736 kubelet[2894]: W0317 17:40:15.958829 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:15.959736 kubelet[2894]: E0317 17:40:15.958843 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:15.959736 kubelet[2894]: E0317 17:40:15.959045 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:15.959736 kubelet[2894]: W0317 17:40:15.959055 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:15.959736 kubelet[2894]: E0317 17:40:15.959066 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:15.960207 kubelet[2894]: E0317 17:40:15.960160 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:15.960207 kubelet[2894]: W0317 17:40:15.960184 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:15.960207 kubelet[2894]: E0317 17:40:15.960200 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:40:15.962985 kubelet[2894]: E0317 17:40:15.962759 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:15.962985 kubelet[2894]: W0317 17:40:15.962779 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:15.962985 kubelet[2894]: E0317 17:40:15.962799 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:15.963840 kubelet[2894]: E0317 17:40:15.963807 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:15.963840 kubelet[2894]: W0317 17:40:15.963829 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:15.963948 kubelet[2894]: E0317 17:40:15.963843 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:15.964104 kubelet[2894]: E0317 17:40:15.964085 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:15.964104 kubelet[2894]: W0317 17:40:15.964101 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:15.964474 kubelet[2894]: E0317 17:40:15.964114 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:15.964474 kubelet[2894]: E0317 17:40:15.964405 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:15.964474 kubelet[2894]: W0317 17:40:15.964416 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:15.964474 kubelet[2894]: E0317 17:40:15.964428 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:15.964807 kubelet[2894]: E0317 17:40:15.964673 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:15.964807 kubelet[2894]: W0317 17:40:15.964684 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:15.964807 kubelet[2894]: E0317 17:40:15.964698 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:40:15.965001 kubelet[2894]: E0317 17:40:15.964933 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:15.965001 kubelet[2894]: W0317 17:40:15.964951 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:15.965001 kubelet[2894]: E0317 17:40:15.964962 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:15.965336 kubelet[2894]: E0317 17:40:15.965328 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:15.965384 kubelet[2894]: W0317 17:40:15.965338 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:15.965384 kubelet[2894]: E0317 17:40:15.965350 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:15.971329 kubelet[2894]: E0317 17:40:15.971269 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:15.971329 kubelet[2894]: W0317 17:40:15.971311 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:15.971538 kubelet[2894]: E0317 17:40:15.971350 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:16.043781 kubelet[2894]: E0317 17:40:16.043613 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:16.043781 kubelet[2894]: W0317 17:40:16.043763 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:16.044002 kubelet[2894]: E0317 17:40:16.043807 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:16.045807 kubelet[2894]: E0317 17:40:16.044297 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:16.045807 kubelet[2894]: W0317 17:40:16.044310 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:16.045807 kubelet[2894]: E0317 17:40:16.044334 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:40:16.046101 kubelet[2894]: E0317 17:40:16.045902 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:16.046101 kubelet[2894]: W0317 17:40:16.045933 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:16.046101 kubelet[2894]: E0317 17:40:16.045969 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:16.050120 kubelet[2894]: E0317 17:40:16.050019 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:16.050120 kubelet[2894]: W0317 17:40:16.050063 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:16.050356 kubelet[2894]: E0317 17:40:16.050297 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:16.050652 kubelet[2894]: E0317 17:40:16.050620 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:16.050834 kubelet[2894]: W0317 17:40:16.050721 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:16.050998 kubelet[2894]: E0317 17:40:16.050913 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:16.051459 kubelet[2894]: E0317 17:40:16.051426 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:16.051459 kubelet[2894]: W0317 17:40:16.051447 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:16.051589 kubelet[2894]: E0317 17:40:16.051559 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:16.052029 kubelet[2894]: E0317 17:40:16.052008 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:16.052029 kubelet[2894]: W0317 17:40:16.052024 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:16.052106 kubelet[2894]: E0317 17:40:16.052088 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:40:16.052353 kubelet[2894]: E0317 17:40:16.052328 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:16.052353 kubelet[2894]: W0317 17:40:16.052342 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:16.052427 kubelet[2894]: E0317 17:40:16.052359 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:16.053167 kubelet[2894]: E0317 17:40:16.053128 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:16.053167 kubelet[2894]: W0317 17:40:16.053152 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:16.053380 kubelet[2894]: E0317 17:40:16.053178 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:16.053490 kubelet[2894]: E0317 17:40:16.053460 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:16.053490 kubelet[2894]: W0317 17:40:16.053476 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:16.053490 kubelet[2894]: E0317 17:40:16.053487 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:16.053762 kubelet[2894]: E0317 17:40:16.053730 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:16.053762 kubelet[2894]: W0317 17:40:16.053756 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:16.054676 kubelet[2894]: E0317 17:40:16.053875 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:40:16.056932 kubelet[2894]: E0317 17:40:16.054923 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:16.056932 kubelet[2894]: W0317 17:40:16.054938 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:16.056932 kubelet[2894]: E0317 17:40:16.055138 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:16.056932 kubelet[2894]: W0317 17:40:16.055146 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:16.056932 kubelet[2894]: E0317 17:40:16.055332 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:16.056932 kubelet[2894]: E0317 17:40:16.055379 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:16.059285 kubelet[2894]: E0317 17:40:16.057424 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:16.059285 kubelet[2894]: W0317 17:40:16.057443 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:16.059285 kubelet[2894]: E0317 17:40:16.057474 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:16.059285 kubelet[2894]: E0317 17:40:16.057811 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:16.059285 kubelet[2894]: W0317 17:40:16.057823 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:16.059285 kubelet[2894]: E0317 17:40:16.057838 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:16.059285 kubelet[2894]: E0317 17:40:16.058094 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:16.059285 kubelet[2894]: W0317 17:40:16.058106 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:16.059285 kubelet[2894]: E0317 17:40:16.058118 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:40:16.059285 kubelet[2894]: E0317 17:40:16.058723 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:16.059711 kubelet[2894]: W0317 17:40:16.058736 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:16.059711 kubelet[2894]: E0317 17:40:16.058759 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:16.059711 kubelet[2894]: E0317 17:40:16.059143 2894 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:40:16.059711 kubelet[2894]: W0317 17:40:16.059176 2894 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:40:16.059711 kubelet[2894]: E0317 17:40:16.059190 2894 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:40:16.401257 containerd[1595]: time="2025-03-17T17:40:16.401124103Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:40:16.403273 containerd[1595]: time="2025-03-17T17:40:16.402729565Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2: active requests=0, bytes read=5364011" Mar 17 17:40:16.407366 containerd[1595]: time="2025-03-17T17:40:16.406407485Z" level=info msg="ImageCreate event name:\"sha256:441bf8ace5b7fa3742b7fafaf6cd60fea340dd307169a18c75a1d78cba3a8365\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:40:16.409517 containerd[1595]: time="2025-03-17T17:40:16.409398117Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:40:16.410758 containerd[1595]: time="2025-03-17T17:40:16.410339823Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" with image id \"sha256:441bf8ace5b7fa3742b7fafaf6cd60fea340dd307169a18c75a1d78cba3a8365\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\", size \"6857075\" in 2.430553538s" Mar 17 17:40:16.410758 containerd[1595]: time="2025-03-17T17:40:16.410413101Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" returns image reference \"sha256:441bf8ace5b7fa3742b7fafaf6cd60fea340dd307169a18c75a1d78cba3a8365\"" Mar 17 17:40:16.421155 containerd[1595]: time="2025-03-17T17:40:16.420993040Z" level=info msg="CreateContainer within sandbox \"b2b790914055531bd27801599a74fcacd9daaf810409971fc4a1dcf80c9de97c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 17 17:40:16.482784 containerd[1595]: time="2025-03-17T17:40:16.480001175Z" level=info msg="CreateContainer within sandbox 
\"b2b790914055531bd27801599a74fcacd9daaf810409971fc4a1dcf80c9de97c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e648e2aee8600ade8519eb097fce4bd8ee4018fd745e6abd77c52703e3b6cab7\"" Mar 17 17:40:16.482784 containerd[1595]: time="2025-03-17T17:40:16.480892888Z" level=info msg="StartContainer for \"e648e2aee8600ade8519eb097fce4bd8ee4018fd745e6abd77c52703e3b6cab7\"" Mar 17 17:40:16.641804 containerd[1595]: time="2025-03-17T17:40:16.641567841Z" level=info msg="StartContainer for \"e648e2aee8600ade8519eb097fce4bd8ee4018fd745e6abd77c52703e3b6cab7\" returns successfully" Mar 17 17:40:16.685742 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e648e2aee8600ade8519eb097fce4bd8ee4018fd745e6abd77c52703e3b6cab7-rootfs.mount: Deactivated successfully. Mar 17 17:40:16.787407 kubelet[2894]: E0317 17:40:16.787333 2894 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-24zxx" podUID="e6243402-8f9c-4b35-b2c7-317fe823ae81" Mar 17 17:40:16.893086 kubelet[2894]: E0317 17:40:16.893019 2894 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:16.894694 kubelet[2894]: E0317 17:40:16.894660 2894 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:17.090743 containerd[1595]: time="2025-03-17T17:40:17.088853199Z" level=info msg="shim disconnected" id=e648e2aee8600ade8519eb097fce4bd8ee4018fd745e6abd77c52703e3b6cab7 namespace=k8s.io Mar 17 17:40:17.090743 containerd[1595]: time="2025-03-17T17:40:17.088923121Z" level=warning msg="cleaning up after shim disconnected" id=e648e2aee8600ade8519eb097fce4bd8ee4018fd745e6abd77c52703e3b6cab7 namespace=k8s.io Mar 17 17:40:17.090743 containerd[1595]: time="2025-03-17T17:40:17.088934653Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:40:17.899868 kubelet[2894]: E0317 17:40:17.899384 2894 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:17.902530 containerd[1595]: time="2025-03-17T17:40:17.902037935Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\"" Mar 17 17:40:18.787737 kubelet[2894]: E0317 17:40:18.787631 2894 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-24zxx" podUID="e6243402-8f9c-4b35-b2c7-317fe823ae81" Mar 17 17:40:20.795784 kubelet[2894]: E0317 17:40:20.794945 2894 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-24zxx" podUID="e6243402-8f9c-4b35-b2c7-317fe823ae81" Mar 17 17:40:22.795753 kubelet[2894]: E0317 17:40:22.795316 2894 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-24zxx" podUID="e6243402-8f9c-4b35-b2c7-317fe823ae81" Mar 17 17:40:23.319762 systemd[1]: Started sshd@7-10.0.0.27:22-10.0.0.1:37746.service - OpenSSH per-connection server daemon (10.0.0.1:37746). Mar 17 17:40:23.394794 sshd[3587]: Accepted publickey for core from 10.0.0.1 port 37746 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:40:23.397610 sshd-session[3587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:40:23.408350 systemd-logind[1578]: New session 8 of user core. Mar 17 17:40:23.417675 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 17 17:40:23.730152 sshd[3590]: Connection closed by 10.0.0.1 port 37746 Mar 17 17:40:23.731512 sshd-session[3587]: pam_unix(sshd:session): session closed for user core Mar 17 17:40:23.736823 systemd[1]: sshd@7-10.0.0.27:22-10.0.0.1:37746.service: Deactivated successfully. Mar 17 17:40:23.740776 systemd-logind[1578]: Session 8 logged out. Waiting for processes to exit. Mar 17 17:40:23.743056 systemd[1]: session-8.scope: Deactivated successfully. Mar 17 17:40:23.745020 systemd-logind[1578]: Removed session 8. Mar 17 17:40:24.848661 kubelet[2894]: E0317 17:40:24.848538 2894 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-24zxx" podUID="e6243402-8f9c-4b35-b2c7-317fe823ae81" Mar 17 17:40:25.479566 systemd-resolved[1460]: Under memory pressure, flushing caches. Mar 17 17:40:25.482511 systemd-journald[1154]: Under memory pressure, flushing caches. Mar 17 17:40:25.479616 systemd-resolved[1460]: Flushed all caches. 
Mar 17 17:40:25.649576 containerd[1595]: time="2025-03-17T17:40:25.647961933Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:40:25.705540 containerd[1595]: time="2025-03-17T17:40:25.705431229Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.2: active requests=0, bytes read=97781477" Mar 17 17:40:25.736257 containerd[1595]: time="2025-03-17T17:40:25.736042896Z" level=info msg="ImageCreate event name:\"sha256:cda13293c895a8a3b06c1e190b70fb6fe61036db2e59764036fc6e65ec374693\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:40:25.788567 containerd[1595]: time="2025-03-17T17:40:25.788411083Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:40:25.790549 containerd[1595]: time="2025-03-17T17:40:25.790080596Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.2\" with image id \"sha256:cda13293c895a8a3b06c1e190b70fb6fe61036db2e59764036fc6e65ec374693\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\", size \"99274581\" in 7.887976477s" Mar 17 17:40:25.790549 containerd[1595]: time="2025-03-17T17:40:25.790142616Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\" returns image reference \"sha256:cda13293c895a8a3b06c1e190b70fb6fe61036db2e59764036fc6e65ec374693\"" Mar 17 17:40:25.821802 containerd[1595]: time="2025-03-17T17:40:25.821712823Z" level=info msg="CreateContainer within sandbox \"b2b790914055531bd27801599a74fcacd9daaf810409971fc4a1dcf80c9de97c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 17 17:40:26.303729 containerd[1595]: time="2025-03-17T17:40:26.302963633Z" level=info msg="CreateContainer within sandbox \"b2b790914055531bd27801599a74fcacd9daaf810409971fc4a1dcf80c9de97c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"2209edf37845f4ea07d37fb461804e61a91ed3cfb12a4acc5aeeef279157f58d\"" Mar 17 17:40:26.305161 containerd[1595]: time="2025-03-17T17:40:26.305005882Z" level=info msg="StartContainer for \"2209edf37845f4ea07d37fb461804e61a91ed3cfb12a4acc5aeeef279157f58d\"" Mar 17 17:40:26.787810 kubelet[2894]: E0317 17:40:26.787732 2894 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-24zxx" podUID="e6243402-8f9c-4b35-b2c7-317fe823ae81" Mar 17 17:40:27.643643 containerd[1595]: time="2025-03-17T17:40:27.643092908Z" level=info msg="StartContainer for \"2209edf37845f4ea07d37fb461804e61a91ed3cfb12a4acc5aeeef279157f58d\" returns successfully" Mar 17 17:40:27.648536 kubelet[2894]: E0317 17:40:27.648037 2894 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:28.656350 kubelet[2894]: E0317 17:40:28.653976 2894 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:28.759121 systemd[1]: Started sshd@8-10.0.0.27:22-10.0.0.1:33104.service - 
OpenSSH per-connection server daemon (10.0.0.1:33104). Mar 17 17:40:28.787936 kubelet[2894]: E0317 17:40:28.787819 2894 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-24zxx" podUID="e6243402-8f9c-4b35-b2c7-317fe823ae81" Mar 17 17:40:29.020783 sshd[3642]: Accepted publickey for core from 10.0.0.1 port 33104 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:40:29.022422 sshd-session[3642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:40:29.029490 systemd-logind[1578]: New session 9 of user core. Mar 17 17:40:29.042009 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 17 17:40:29.269124 sshd[3645]: Connection closed by 10.0.0.1 port 33104 Mar 17 17:40:29.271490 sshd-session[3642]: pam_unix(sshd:session): session closed for user core Mar 17 17:40:29.277302 systemd[1]: sshd@8-10.0.0.27:22-10.0.0.1:33104.service: Deactivated successfully. Mar 17 17:40:29.280785 systemd[1]: session-9.scope: Deactivated successfully. Mar 17 17:40:29.280849 systemd-logind[1578]: Session 9 logged out. Waiting for processes to exit. Mar 17 17:40:29.282867 systemd-logind[1578]: Removed session 9. Mar 17 17:40:29.782992 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2209edf37845f4ea07d37fb461804e61a91ed3cfb12a4acc5aeeef279157f58d-rootfs.mount: Deactivated successfully. Mar 17 17:40:29.800167 containerd[1595]: time="2025-03-17T17:40:29.799715880Z" level=info msg="shim disconnected" id=2209edf37845f4ea07d37fb461804e61a91ed3cfb12a4acc5aeeef279157f58d namespace=k8s.io Mar 17 17:40:29.800167 containerd[1595]: time="2025-03-17T17:40:29.799805042Z" level=warning msg="cleaning up after shim disconnected" id=2209edf37845f4ea07d37fb461804e61a91ed3cfb12a4acc5aeeef279157f58d namespace=k8s.io Mar 17 17:40:29.800167 containerd[1595]: time="2025-03-17T17:40:29.799817726Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:40:29.825910 kubelet[2894]: I0317 17:40:29.825877 2894 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Mar 17 17:40:29.910600 containerd[1595]: time="2025-03-17T17:40:29.909725022Z" level=warning msg="cleanup warnings time=\"2025-03-17T17:40:29Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 17 17:40:29.918538 kubelet[2894]: I0317 17:40:29.918474 2894 topology_manager.go:215] "Topology Admit Handler" podUID="1cbd3c90-0c66-408d-9e5d-1382eccfbde6" podNamespace="kube-system" podName="coredns-7db6d8ff4d-5xpt7" Mar 17 17:40:29.924916 kubelet[2894]: I0317 17:40:29.924687 2894 topology_manager.go:215] "Topology Admit Handler" podUID="05bc58a2-8b10-4350-b41e-7b091d9a3a8c" podNamespace="calico-apiserver" podName="calico-apiserver-779d48f5d9-9lw4k" Mar 17 17:40:29.924916 kubelet[2894]: I0317 17:40:29.924889 2894 topology_manager.go:215] "Topology Admit Handler" podUID="43ddfd49-802e-4437-b6f0-ed427cdd6be8" podNamespace="calico-system" podName="calico-kube-controllers-5b6b58f89d-g52xg" Mar 17 17:40:29.937298 kubelet[2894]: I0317 17:40:29.934618 2894 topology_manager.go:215] "Topology Admit Handler" podUID="e68c1525-3bc8-4435-a253-fa308a8e7604" podNamespace="kube-system" podName="coredns-7db6d8ff4d-j5l2k" Mar 17 17:40:29.937298 kubelet[2894]: I0317 
17:40:29.936543 2894 topology_manager.go:215] "Topology Admit Handler" podUID="c6ebfa09-1d89-41a1-975e-0d041b544630" podNamespace="calico-apiserver" podName="calico-apiserver-779d48f5d9-dsbpp" Mar 17 17:40:30.058267 kubelet[2894]: I0317 17:40:30.053973 2894 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdf9w\" (UniqueName: \"kubernetes.io/projected/c6ebfa09-1d89-41a1-975e-0d041b544630-kube-api-access-cdf9w\") pod \"calico-apiserver-779d48f5d9-dsbpp\" (UID: \"c6ebfa09-1d89-41a1-975e-0d041b544630\") " pod="calico-apiserver/calico-apiserver-779d48f5d9-dsbpp" Mar 17 17:40:30.058267 kubelet[2894]: I0317 17:40:30.054072 2894 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frclv\" (UniqueName: \"kubernetes.io/projected/1cbd3c90-0c66-408d-9e5d-1382eccfbde6-kube-api-access-frclv\") pod \"coredns-7db6d8ff4d-5xpt7\" (UID: \"1cbd3c90-0c66-408d-9e5d-1382eccfbde6\") " pod="kube-system/coredns-7db6d8ff4d-5xpt7" Mar 17 17:40:30.058267 kubelet[2894]: I0317 17:40:30.054104 2894 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c6ebfa09-1d89-41a1-975e-0d041b544630-calico-apiserver-certs\") pod \"calico-apiserver-779d48f5d9-dsbpp\" (UID: \"c6ebfa09-1d89-41a1-975e-0d041b544630\") " pod="calico-apiserver/calico-apiserver-779d48f5d9-dsbpp" Mar 17 17:40:30.058267 kubelet[2894]: I0317 17:40:30.054130 2894 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/05bc58a2-8b10-4350-b41e-7b091d9a3a8c-calico-apiserver-certs\") pod \"calico-apiserver-779d48f5d9-9lw4k\" (UID: \"05bc58a2-8b10-4350-b41e-7b091d9a3a8c\") " pod="calico-apiserver/calico-apiserver-779d48f5d9-9lw4k" Mar 17 17:40:30.058267 kubelet[2894]: I0317 17:40:30.054156 2894 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pw2dd\" (UniqueName: \"kubernetes.io/projected/e68c1525-3bc8-4435-a253-fa308a8e7604-kube-api-access-pw2dd\") pod \"coredns-7db6d8ff4d-j5l2k\" (UID: \"e68c1525-3bc8-4435-a253-fa308a8e7604\") " pod="kube-system/coredns-7db6d8ff4d-j5l2k" Mar 17 17:40:30.058611 kubelet[2894]: I0317 17:40:30.054185 2894 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1cbd3c90-0c66-408d-9e5d-1382eccfbde6-config-volume\") pod \"coredns-7db6d8ff4d-5xpt7\" (UID: \"1cbd3c90-0c66-408d-9e5d-1382eccfbde6\") " pod="kube-system/coredns-7db6d8ff4d-5xpt7" Mar 17 17:40:30.058611 kubelet[2894]: I0317 17:40:30.054215 2894 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e68c1525-3bc8-4435-a253-fa308a8e7604-config-volume\") pod \"coredns-7db6d8ff4d-j5l2k\" (UID: \"e68c1525-3bc8-4435-a253-fa308a8e7604\") " pod="kube-system/coredns-7db6d8ff4d-j5l2k" Mar 17 17:40:30.058611 kubelet[2894]: I0317 17:40:30.054262 2894 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzv2m\" (UniqueName: \"kubernetes.io/projected/05bc58a2-8b10-4350-b41e-7b091d9a3a8c-kube-api-access-kzv2m\") pod \"calico-apiserver-779d48f5d9-9lw4k\" (UID: \"05bc58a2-8b10-4350-b41e-7b091d9a3a8c\") " pod="calico-apiserver/calico-apiserver-779d48f5d9-9lw4k" 
Mar 17 17:40:30.058611 kubelet[2894]: I0317 17:40:30.054286 2894 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43ddfd49-802e-4437-b6f0-ed427cdd6be8-tigera-ca-bundle\") pod \"calico-kube-controllers-5b6b58f89d-g52xg\" (UID: \"43ddfd49-802e-4437-b6f0-ed427cdd6be8\") " pod="calico-system/calico-kube-controllers-5b6b58f89d-g52xg" Mar 17 17:40:30.058611 kubelet[2894]: I0317 17:40:30.054308 2894 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtw9k\" (UniqueName: \"kubernetes.io/projected/43ddfd49-802e-4437-b6f0-ed427cdd6be8-kube-api-access-qtw9k\") pod \"calico-kube-controllers-5b6b58f89d-g52xg\" (UID: \"43ddfd49-802e-4437-b6f0-ed427cdd6be8\") " pod="calico-system/calico-kube-controllers-5b6b58f89d-g52xg" Mar 17 17:40:30.239740 kubelet[2894]: E0317 17:40:30.239543 2894 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:30.241047 containerd[1595]: time="2025-03-17T17:40:30.240893988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5xpt7,Uid:1cbd3c90-0c66-408d-9e5d-1382eccfbde6,Namespace:kube-system,Attempt:0,}" Mar 17 17:40:30.253905 containerd[1595]: time="2025-03-17T17:40:30.253832836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-779d48f5d9-9lw4k,Uid:05bc58a2-8b10-4350-b41e-7b091d9a3a8c,Namespace:calico-apiserver,Attempt:0,}" Mar 17 17:40:30.269333 containerd[1595]: time="2025-03-17T17:40:30.269269878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-779d48f5d9-dsbpp,Uid:c6ebfa09-1d89-41a1-975e-0d041b544630,Namespace:calico-apiserver,Attempt:0,}" Mar 17 17:40:30.270794 kubelet[2894]: E0317 17:40:30.270684 2894 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:30.272397 containerd[1595]: time="2025-03-17T17:40:30.271953870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j5l2k,Uid:e68c1525-3bc8-4435-a253-fa308a8e7604,Namespace:kube-system,Attempt:0,}" Mar 17 17:40:30.276791 containerd[1595]: time="2025-03-17T17:40:30.276745767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b6b58f89d-g52xg,Uid:43ddfd49-802e-4437-b6f0-ed427cdd6be8,Namespace:calico-system,Attempt:0,}" Mar 17 17:40:30.487475 containerd[1595]: time="2025-03-17T17:40:30.487412823Z" level=error msg="Failed to destroy network for sandbox \"f5c43aa1edb80e8f71c4f32c2f94d14b61bb5ad2e565373b70dd7c592880ce33\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:30.494759 containerd[1595]: time="2025-03-17T17:40:30.493030728Z" level=error msg="Failed to destroy network for sandbox \"365256915cb838ef59c5441635355f0f8c542b492296dba2257adbd1208d3145\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:30.497159 containerd[1595]: time="2025-03-17T17:40:30.497103973Z" level=error msg="encountered an error cleaning up failed sandbox 
\"365256915cb838ef59c5441635355f0f8c542b492296dba2257adbd1208d3145\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:30.497322 containerd[1595]: time="2025-03-17T17:40:30.497198134Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5xpt7,Uid:1cbd3c90-0c66-408d-9e5d-1382eccfbde6,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"365256915cb838ef59c5441635355f0f8c542b492296dba2257adbd1208d3145\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:30.500469 containerd[1595]: time="2025-03-17T17:40:30.499974624Z" level=error msg="encountered an error cleaning up failed sandbox \"f5c43aa1edb80e8f71c4f32c2f94d14b61bb5ad2e565373b70dd7c592880ce33\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:30.501087 containerd[1595]: time="2025-03-17T17:40:30.501057828Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-779d48f5d9-9lw4k,Uid:05bc58a2-8b10-4350-b41e-7b091d9a3a8c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f5c43aa1edb80e8f71c4f32c2f94d14b61bb5ad2e565373b70dd7c592880ce33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:30.507950 kubelet[2894]: E0317 17:40:30.507868 2894 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5c43aa1edb80e8f71c4f32c2f94d14b61bb5ad2e565373b70dd7c592880ce33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:30.508172 kubelet[2894]: E0317 17:40:30.507983 2894 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5c43aa1edb80e8f71c4f32c2f94d14b61bb5ad2e565373b70dd7c592880ce33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-779d48f5d9-9lw4k" Mar 17 17:40:30.508172 kubelet[2894]: E0317 17:40:30.507897 2894 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"365256915cb838ef59c5441635355f0f8c542b492296dba2257adbd1208d3145\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:30.508172 kubelet[2894]: E0317 17:40:30.508114 2894 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"365256915cb838ef59c5441635355f0f8c542b492296dba2257adbd1208d3145\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-5xpt7" Mar 17 17:40:30.508288 kubelet[2894]: E0317 17:40:30.508021 2894 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5c43aa1edb80e8f71c4f32c2f94d14b61bb5ad2e565373b70dd7c592880ce33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-779d48f5d9-9lw4k" Mar 17 17:40:30.508330 kubelet[2894]: E0317 17:40:30.508290 2894 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-779d48f5d9-9lw4k_calico-apiserver(05bc58a2-8b10-4350-b41e-7b091d9a3a8c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-779d48f5d9-9lw4k_calico-apiserver(05bc58a2-8b10-4350-b41e-7b091d9a3a8c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f5c43aa1edb80e8f71c4f32c2f94d14b61bb5ad2e565373b70dd7c592880ce33\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-779d48f5d9-9lw4k" podUID="05bc58a2-8b10-4350-b41e-7b091d9a3a8c" Mar 17 17:40:30.509178 kubelet[2894]: E0317 17:40:30.508733 2894 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"365256915cb838ef59c5441635355f0f8c542b492296dba2257adbd1208d3145\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-5xpt7" Mar 17 17:40:30.509178 kubelet[2894]: E0317 17:40:30.508782 2894 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-5xpt7_kube-system(1cbd3c90-0c66-408d-9e5d-1382eccfbde6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-5xpt7_kube-system(1cbd3c90-0c66-408d-9e5d-1382eccfbde6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"365256915cb838ef59c5441635355f0f8c542b492296dba2257adbd1208d3145\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-5xpt7" podUID="1cbd3c90-0c66-408d-9e5d-1382eccfbde6" Mar 17 17:40:30.510504 containerd[1595]: time="2025-03-17T17:40:30.510466385Z" level=error msg="Failed to destroy network for sandbox \"6d5908974dd55c896be2ce2e8fc7161568647ff858d2e39ef99109ac7e6c038f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:30.511793 containerd[1595]: time="2025-03-17T17:40:30.511769261Z" level=error msg="encountered an error cleaning up failed sandbox \"6d5908974dd55c896be2ce2e8fc7161568647ff858d2e39ef99109ac7e6c038f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:30.512076 containerd[1595]: time="2025-03-17T17:40:30.512055411Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b6b58f89d-g52xg,Uid:43ddfd49-802e-4437-b6f0-ed427cdd6be8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6d5908974dd55c896be2ce2e8fc7161568647ff858d2e39ef99109ac7e6c038f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:30.512378 containerd[1595]: time="2025-03-17T17:40:30.512329208Z" level=error msg="Failed to destroy network for sandbox \"fd6dfebbb17b8ec1edcc3a670e197687101f350c8be57ce7dffd6c1bfaccbdbd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:30.512516 kubelet[2894]: E0317 17:40:30.512435 2894 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d5908974dd55c896be2ce2e8fc7161568647ff858d2e39ef99109ac7e6c038f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:30.512516 kubelet[2894]: E0317 17:40:30.512509 2894 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d5908974dd55c896be2ce2e8fc7161568647ff858d2e39ef99109ac7e6c038f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5b6b58f89d-g52xg" Mar 17 17:40:30.512597 kubelet[2894]: E0317 17:40:30.512530 2894 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d5908974dd55c896be2ce2e8fc7161568647ff858d2e39ef99109ac7e6c038f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5b6b58f89d-g52xg" Mar 17 17:40:30.512597 kubelet[2894]: E0317 17:40:30.512575 2894 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5b6b58f89d-g52xg_calico-system(43ddfd49-802e-4437-b6f0-ed427cdd6be8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5b6b58f89d-g52xg_calico-system(43ddfd49-802e-4437-b6f0-ed427cdd6be8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6d5908974dd55c896be2ce2e8fc7161568647ff858d2e39ef99109ac7e6c038f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5b6b58f89d-g52xg" podUID="43ddfd49-802e-4437-b6f0-ed427cdd6be8" Mar 17 17:40:30.512823 containerd[1595]: time="2025-03-17T17:40:30.512794723Z" level=error msg="encountered an error cleaning up failed sandbox 
\"fd6dfebbb17b8ec1edcc3a670e197687101f350c8be57ce7dffd6c1bfaccbdbd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:30.512883 containerd[1595]: time="2025-03-17T17:40:30.512842184Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-779d48f5d9-dsbpp,Uid:c6ebfa09-1d89-41a1-975e-0d041b544630,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fd6dfebbb17b8ec1edcc3a670e197687101f350c8be57ce7dffd6c1bfaccbdbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:30.513157 kubelet[2894]: E0317 17:40:30.513100 2894 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd6dfebbb17b8ec1edcc3a670e197687101f350c8be57ce7dffd6c1bfaccbdbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:30.513233 kubelet[2894]: E0317 17:40:30.513194 2894 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd6dfebbb17b8ec1edcc3a670e197687101f350c8be57ce7dffd6c1bfaccbdbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-779d48f5d9-dsbpp" Mar 17 17:40:30.513319 kubelet[2894]: E0317 17:40:30.513243 2894 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd6dfebbb17b8ec1edcc3a670e197687101f350c8be57ce7dffd6c1bfaccbdbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-779d48f5d9-dsbpp" Mar 17 17:40:30.513412 kubelet[2894]: E0317 17:40:30.513369 2894 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-779d48f5d9-dsbpp_calico-apiserver(c6ebfa09-1d89-41a1-975e-0d041b544630)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-779d48f5d9-dsbpp_calico-apiserver(c6ebfa09-1d89-41a1-975e-0d041b544630)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fd6dfebbb17b8ec1edcc3a670e197687101f350c8be57ce7dffd6c1bfaccbdbd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-779d48f5d9-dsbpp" podUID="c6ebfa09-1d89-41a1-975e-0d041b544630" Mar 17 17:40:30.514370 containerd[1595]: time="2025-03-17T17:40:30.514300159Z" level=error msg="Failed to destroy network for sandbox \"e4b287cc3440b029a61eafe60d6154dd7ee593208c644b16f34a54fd4da37eac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 
17 17:40:30.514753 containerd[1595]: time="2025-03-17T17:40:30.514705067Z" level=error msg="encountered an error cleaning up failed sandbox \"e4b287cc3440b029a61eafe60d6154dd7ee593208c644b16f34a54fd4da37eac\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:30.514804 containerd[1595]: time="2025-03-17T17:40:30.514766105Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j5l2k,Uid:e68c1525-3bc8-4435-a253-fa308a8e7604,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e4b287cc3440b029a61eafe60d6154dd7ee593208c644b16f34a54fd4da37eac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:30.515054 kubelet[2894]: E0317 17:40:30.514999 2894 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4b287cc3440b029a61eafe60d6154dd7ee593208c644b16f34a54fd4da37eac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:30.515136 kubelet[2894]: E0317 17:40:30.515059 2894 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4b287cc3440b029a61eafe60d6154dd7ee593208c644b16f34a54fd4da37eac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-j5l2k" Mar 17 17:40:30.515136 kubelet[2894]: E0317 17:40:30.515080 2894 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4b287cc3440b029a61eafe60d6154dd7ee593208c644b16f34a54fd4da37eac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-j5l2k" Mar 17 17:40:30.515210 kubelet[2894]: E0317 17:40:30.515128 2894 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-j5l2k_kube-system(e68c1525-3bc8-4435-a253-fa308a8e7604)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-j5l2k_kube-system(e68c1525-3bc8-4435-a253-fa308a8e7604)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e4b287cc3440b029a61eafe60d6154dd7ee593208c644b16f34a54fd4da37eac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-j5l2k" podUID="e68c1525-3bc8-4435-a253-fa308a8e7604" Mar 17 17:40:30.662391 kubelet[2894]: I0317 17:40:30.661927 2894 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d5908974dd55c896be2ce2e8fc7161568647ff858d2e39ef99109ac7e6c038f" Mar 17 17:40:30.665154 kubelet[2894]: I0317 17:40:30.664520 2894 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="e4b287cc3440b029a61eafe60d6154dd7ee593208c644b16f34a54fd4da37eac" Mar 17 17:40:30.666303 containerd[1595]: time="2025-03-17T17:40:30.665840658Z" level=info msg="StopPodSandbox for \"e4b287cc3440b029a61eafe60d6154dd7ee593208c644b16f34a54fd4da37eac\"" Mar 17 17:40:30.667668 containerd[1595]: time="2025-03-17T17:40:30.667636452Z" level=info msg="StopPodSandbox for \"6d5908974dd55c896be2ce2e8fc7161568647ff858d2e39ef99109ac7e6c038f\"" Mar 17 17:40:30.668507 kubelet[2894]: I0317 17:40:30.667763 2894 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd6dfebbb17b8ec1edcc3a670e197687101f350c8be57ce7dffd6c1bfaccbdbd" Mar 17 17:40:30.669740 containerd[1595]: time="2025-03-17T17:40:30.669292447Z" level=info msg="StopPodSandbox for \"fd6dfebbb17b8ec1edcc3a670e197687101f350c8be57ce7dffd6c1bfaccbdbd\"" Mar 17 17:40:30.670965 kubelet[2894]: I0317 17:40:30.670932 2894 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5c43aa1edb80e8f71c4f32c2f94d14b61bb5ad2e565373b70dd7c592880ce33" Mar 17 17:40:30.671805 containerd[1595]: time="2025-03-17T17:40:30.671757458Z" level=info msg="StopPodSandbox for \"f5c43aa1edb80e8f71c4f32c2f94d14b61bb5ad2e565373b70dd7c592880ce33\"" Mar 17 17:40:30.684425 containerd[1595]: time="2025-03-17T17:40:30.684110719Z" level=info msg="Ensure that sandbox e4b287cc3440b029a61eafe60d6154dd7ee593208c644b16f34a54fd4da37eac in task-service has been cleanup successfully" Mar 17 17:40:30.684668 kubelet[2894]: I0317 17:40:30.684210 2894 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="365256915cb838ef59c5441635355f0f8c542b492296dba2257adbd1208d3145" Mar 17 17:40:30.684841 containerd[1595]: time="2025-03-17T17:40:30.684779114Z" level=info msg="TearDown network for sandbox \"e4b287cc3440b029a61eafe60d6154dd7ee593208c644b16f34a54fd4da37eac\" successfully" Mar 17 17:40:30.684841 containerd[1595]: time="2025-03-17T17:40:30.684805957Z" level=info msg="StopPodSandbox for \"e4b287cc3440b029a61eafe60d6154dd7ee593208c644b16f34a54fd4da37eac\" returns successfully" Mar 17 17:40:30.685180 kubelet[2894]: E0317 17:40:30.685157 2894 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:30.685515 containerd[1595]: time="2025-03-17T17:40:30.685491886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j5l2k,Uid:e68c1525-3bc8-4435-a253-fa308a8e7604,Namespace:kube-system,Attempt:1,}" Mar 17 17:40:30.692164 containerd[1595]: time="2025-03-17T17:40:30.692020994Z" level=info msg="Ensure that sandbox fd6dfebbb17b8ec1edcc3a670e197687101f350c8be57ce7dffd6c1bfaccbdbd in task-service has been cleanup successfully" Mar 17 17:40:30.692422 containerd[1595]: time="2025-03-17T17:40:30.692344736Z" level=info msg="TearDown network for sandbox \"fd6dfebbb17b8ec1edcc3a670e197687101f350c8be57ce7dffd6c1bfaccbdbd\" successfully" Mar 17 17:40:30.692422 containerd[1595]: time="2025-03-17T17:40:30.692363342Z" level=info msg="StopPodSandbox for \"fd6dfebbb17b8ec1edcc3a670e197687101f350c8be57ce7dffd6c1bfaccbdbd\" returns successfully" Mar 17 17:40:30.693010 containerd[1595]: time="2025-03-17T17:40:30.692603394Z" level=info msg="StopPodSandbox for \"365256915cb838ef59c5441635355f0f8c542b492296dba2257adbd1208d3145\"" Mar 17 17:40:30.693010 containerd[1595]: time="2025-03-17T17:40:30.692853475Z" level=info msg="Ensure that sandbox 
365256915cb838ef59c5441635355f0f8c542b492296dba2257adbd1208d3145 in task-service has been cleanup successfully" Mar 17 17:40:30.693261 containerd[1595]: time="2025-03-17T17:40:30.693243115Z" level=info msg="TearDown network for sandbox \"365256915cb838ef59c5441635355f0f8c542b492296dba2257adbd1208d3145\" successfully" Mar 17 17:40:30.693429 containerd[1595]: time="2025-03-17T17:40:30.693322327Z" level=info msg="StopPodSandbox for \"365256915cb838ef59c5441635355f0f8c542b492296dba2257adbd1208d3145\" returns successfully" Mar 17 17:40:30.693828 kubelet[2894]: E0317 17:40:30.693752 2894 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:30.693895 containerd[1595]: time="2025-03-17T17:40:30.693758666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-779d48f5d9-dsbpp,Uid:c6ebfa09-1d89-41a1-975e-0d041b544630,Namespace:calico-apiserver,Attempt:1,}" Mar 17 17:40:30.694052 containerd[1595]: time="2025-03-17T17:40:30.694030188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5xpt7,Uid:1cbd3c90-0c66-408d-9e5d-1382eccfbde6,Namespace:kube-system,Attempt:1,}" Mar 17 17:40:30.697535 kubelet[2894]: E0317 17:40:30.697499 2894 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:30.699654 containerd[1595]: time="2025-03-17T17:40:30.699594080Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\"" Mar 17 17:40:30.715641 containerd[1595]: time="2025-03-17T17:40:30.715573486Z" level=info msg="Ensure that sandbox 6d5908974dd55c896be2ce2e8fc7161568647ff858d2e39ef99109ac7e6c038f in task-service has been cleanup successfully" Mar 17 17:40:30.715916 containerd[1595]: time="2025-03-17T17:40:30.715869515Z" level=info msg="TearDown network for sandbox \"6d5908974dd55c896be2ce2e8fc7161568647ff858d2e39ef99109ac7e6c038f\" successfully" Mar 17 17:40:30.715916 containerd[1595]: time="2025-03-17T17:40:30.715890315Z" level=info msg="StopPodSandbox for \"6d5908974dd55c896be2ce2e8fc7161568647ff858d2e39ef99109ac7e6c038f\" returns successfully" Mar 17 17:40:30.716596 containerd[1595]: time="2025-03-17T17:40:30.716573118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b6b58f89d-g52xg,Uid:43ddfd49-802e-4437-b6f0-ed427cdd6be8,Namespace:calico-system,Attempt:1,}" Mar 17 17:40:30.720721 containerd[1595]: time="2025-03-17T17:40:30.720669688Z" level=info msg="Ensure that sandbox f5c43aa1edb80e8f71c4f32c2f94d14b61bb5ad2e565373b70dd7c592880ce33 in task-service has been cleanup successfully" Mar 17 17:40:30.720922 containerd[1595]: time="2025-03-17T17:40:30.720895803Z" level=info msg="TearDown network for sandbox \"f5c43aa1edb80e8f71c4f32c2f94d14b61bb5ad2e565373b70dd7c592880ce33\" successfully" Mar 17 17:40:30.720922 containerd[1595]: time="2025-03-17T17:40:30.720914579Z" level=info msg="StopPodSandbox for \"f5c43aa1edb80e8f71c4f32c2f94d14b61bb5ad2e565373b70dd7c592880ce33\" returns successfully" Mar 17 17:40:30.721622 containerd[1595]: time="2025-03-17T17:40:30.721582103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-779d48f5d9-9lw4k,Uid:05bc58a2-8b10-4350-b41e-7b091d9a3a8c,Namespace:calico-apiserver,Attempt:1,}" Mar 17 17:40:30.800483 containerd[1595]: time="2025-03-17T17:40:30.799989384Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-24zxx,Uid:e6243402-8f9c-4b35-b2c7-317fe823ae81,Namespace:calico-system,Attempt:0,}" Mar 17 17:40:30.896934 containerd[1595]: time="2025-03-17T17:40:30.895767098Z" level=error msg="Failed to destroy network for sandbox \"b9d430bac5a5a7a8efd8a37750b38ef6c82b3d3bf17afcb8593cde6668d95c1c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:30.896934 containerd[1595]: time="2025-03-17T17:40:30.896374175Z" level=error msg="encountered an error cleaning up failed sandbox \"b9d430bac5a5a7a8efd8a37750b38ef6c82b3d3bf17afcb8593cde6668d95c1c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:30.896934 containerd[1595]: time="2025-03-17T17:40:30.896443840Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-779d48f5d9-9lw4k,Uid:05bc58a2-8b10-4350-b41e-7b091d9a3a8c,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"b9d430bac5a5a7a8efd8a37750b38ef6c82b3d3bf17afcb8593cde6668d95c1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:30.901097 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b9d430bac5a5a7a8efd8a37750b38ef6c82b3d3bf17afcb8593cde6668d95c1c-shm.mount: Deactivated successfully. Mar 17 17:40:30.903750 kubelet[2894]: E0317 17:40:30.903281 2894 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9d430bac5a5a7a8efd8a37750b38ef6c82b3d3bf17afcb8593cde6668d95c1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:30.903750 kubelet[2894]: E0317 17:40:30.903358 2894 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9d430bac5a5a7a8efd8a37750b38ef6c82b3d3bf17afcb8593cde6668d95c1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-779d48f5d9-9lw4k" Mar 17 17:40:30.903750 kubelet[2894]: E0317 17:40:30.903386 2894 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9d430bac5a5a7a8efd8a37750b38ef6c82b3d3bf17afcb8593cde6668d95c1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-779d48f5d9-9lw4k" Mar 17 17:40:30.904423 kubelet[2894]: E0317 17:40:30.903437 2894 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-779d48f5d9-9lw4k_calico-apiserver(05bc58a2-8b10-4350-b41e-7b091d9a3a8c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-779d48f5d9-9lw4k_calico-apiserver(05bc58a2-8b10-4350-b41e-7b091d9a3a8c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b9d430bac5a5a7a8efd8a37750b38ef6c82b3d3bf17afcb8593cde6668d95c1c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-779d48f5d9-9lw4k" podUID="05bc58a2-8b10-4350-b41e-7b091d9a3a8c" Mar 17 17:40:30.937741 containerd[1595]: time="2025-03-17T17:40:30.937670486Z" level=error msg="Failed to destroy network for sandbox \"20eeb52a08d0b508aef33de21dde17e0cd81210aa71fa73dbc2ac4461c919633\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:30.943754 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-20eeb52a08d0b508aef33de21dde17e0cd81210aa71fa73dbc2ac4461c919633-shm.mount: Deactivated successfully. Mar 17 17:40:30.947111 containerd[1595]: time="2025-03-17T17:40:30.947043745Z" level=error msg="encountered an error cleaning up failed sandbox \"20eeb52a08d0b508aef33de21dde17e0cd81210aa71fa73dbc2ac4461c919633\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:30.947209 containerd[1595]: time="2025-03-17T17:40:30.947157484Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5xpt7,Uid:1cbd3c90-0c66-408d-9e5d-1382eccfbde6,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"20eeb52a08d0b508aef33de21dde17e0cd81210aa71fa73dbc2ac4461c919633\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:30.953505 kubelet[2894]: E0317 17:40:30.949092 2894 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20eeb52a08d0b508aef33de21dde17e0cd81210aa71fa73dbc2ac4461c919633\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:30.953505 kubelet[2894]: E0317 17:40:30.949179 2894 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20eeb52a08d0b508aef33de21dde17e0cd81210aa71fa73dbc2ac4461c919633\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-5xpt7" Mar 17 17:40:30.953505 kubelet[2894]: E0317 17:40:30.949217 2894 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20eeb52a08d0b508aef33de21dde17e0cd81210aa71fa73dbc2ac4461c919633\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-5xpt7" Mar 17 17:40:30.953831 
kubelet[2894]: E0317 17:40:30.949341 2894 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-5xpt7_kube-system(1cbd3c90-0c66-408d-9e5d-1382eccfbde6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-5xpt7_kube-system(1cbd3c90-0c66-408d-9e5d-1382eccfbde6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"20eeb52a08d0b508aef33de21dde17e0cd81210aa71fa73dbc2ac4461c919633\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-5xpt7" podUID="1cbd3c90-0c66-408d-9e5d-1382eccfbde6" Mar 17 17:40:30.964453 containerd[1595]: time="2025-03-17T17:40:30.964360473Z" level=error msg="Failed to destroy network for sandbox \"42b421cddd2705783b89c22afae3fb49cf60cb0687bb90d17c77fdd923f81a26\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:30.966449 containerd[1595]: time="2025-03-17T17:40:30.966398131Z" level=error msg="encountered an error cleaning up failed sandbox \"42b421cddd2705783b89c22afae3fb49cf60cb0687bb90d17c77fdd923f81a26\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:30.966548 containerd[1595]: time="2025-03-17T17:40:30.966483426Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b6b58f89d-g52xg,Uid:43ddfd49-802e-4437-b6f0-ed427cdd6be8,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"42b421cddd2705783b89c22afae3fb49cf60cb0687bb90d17c77fdd923f81a26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:30.966838 kubelet[2894]: E0317 17:40:30.966788 2894 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42b421cddd2705783b89c22afae3fb49cf60cb0687bb90d17c77fdd923f81a26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:30.967016 kubelet[2894]: E0317 17:40:30.966866 2894 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42b421cddd2705783b89c22afae3fb49cf60cb0687bb90d17c77fdd923f81a26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5b6b58f89d-g52xg" Mar 17 17:40:30.967016 kubelet[2894]: E0317 17:40:30.966893 2894 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42b421cddd2705783b89c22afae3fb49cf60cb0687bb90d17c77fdd923f81a26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5b6b58f89d-g52xg" Mar 17 17:40:30.969765 kubelet[2894]: E0317 17:40:30.966957 2894 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5b6b58f89d-g52xg_calico-system(43ddfd49-802e-4437-b6f0-ed427cdd6be8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5b6b58f89d-g52xg_calico-system(43ddfd49-802e-4437-b6f0-ed427cdd6be8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"42b421cddd2705783b89c22afae3fb49cf60cb0687bb90d17c77fdd923f81a26\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5b6b58f89d-g52xg" podUID="43ddfd49-802e-4437-b6f0-ed427cdd6be8" Mar 17 17:40:30.975073 containerd[1595]: time="2025-03-17T17:40:30.974955281Z" level=error msg="Failed to destroy network for sandbox \"9ffaa7346b30a0a0fdab8bbda65eb3e04adb1e85c75bb216bbff56520558abc7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:30.978670 containerd[1595]: time="2025-03-17T17:40:30.978441066Z" level=error msg="Failed to destroy network for sandbox \"a68bf2fe20fe8241ece0438f3d2d33a6cb6d0512af7d808f8184cdf260b62dad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:30.982373 containerd[1595]: time="2025-03-17T17:40:30.982055989Z" level=error msg="encountered an error cleaning up failed sandbox \"9ffaa7346b30a0a0fdab8bbda65eb3e04adb1e85c75bb216bbff56520558abc7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:30.982373 containerd[1595]: time="2025-03-17T17:40:30.982158055Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j5l2k,Uid:e68c1525-3bc8-4435-a253-fa308a8e7604,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"9ffaa7346b30a0a0fdab8bbda65eb3e04adb1e85c75bb216bbff56520558abc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:30.983905 kubelet[2894]: E0317 17:40:30.982707 2894 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ffaa7346b30a0a0fdab8bbda65eb3e04adb1e85c75bb216bbff56520558abc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:30.983905 kubelet[2894]: E0317 17:40:30.982797 2894 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ffaa7346b30a0a0fdab8bbda65eb3e04adb1e85c75bb216bbff56520558abc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-j5l2k" Mar 17 17:40:30.983905 kubelet[2894]: E0317 17:40:30.982824 2894 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ffaa7346b30a0a0fdab8bbda65eb3e04adb1e85c75bb216bbff56520558abc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-j5l2k" Mar 17 17:40:30.984087 containerd[1595]: time="2025-03-17T17:40:30.982859985Z" level=error msg="encountered an error cleaning up failed sandbox \"a68bf2fe20fe8241ece0438f3d2d33a6cb6d0512af7d808f8184cdf260b62dad\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:30.984087 containerd[1595]: time="2025-03-17T17:40:30.982960358Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-779d48f5d9-dsbpp,Uid:c6ebfa09-1d89-41a1-975e-0d041b544630,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"a68bf2fe20fe8241ece0438f3d2d33a6cb6d0512af7d808f8184cdf260b62dad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:30.984190 kubelet[2894]: E0317 17:40:30.982876 2894 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-j5l2k_kube-system(e68c1525-3bc8-4435-a253-fa308a8e7604)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-j5l2k_kube-system(e68c1525-3bc8-4435-a253-fa308a8e7604)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9ffaa7346b30a0a0fdab8bbda65eb3e04adb1e85c75bb216bbff56520558abc7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-j5l2k" podUID="e68c1525-3bc8-4435-a253-fa308a8e7604" Mar 17 17:40:30.984384 kubelet[2894]: E0317 17:40:30.984358 2894 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a68bf2fe20fe8241ece0438f3d2d33a6cb6d0512af7d808f8184cdf260b62dad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:30.984603 kubelet[2894]: E0317 17:40:30.984505 2894 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a68bf2fe20fe8241ece0438f3d2d33a6cb6d0512af7d808f8184cdf260b62dad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-779d48f5d9-dsbpp" Mar 17 17:40:30.984603 kubelet[2894]: E0317 17:40:30.984532 2894 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"a68bf2fe20fe8241ece0438f3d2d33a6cb6d0512af7d808f8184cdf260b62dad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-779d48f5d9-dsbpp" Mar 17 17:40:30.984603 kubelet[2894]: E0317 17:40:30.984569 2894 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-779d48f5d9-dsbpp_calico-apiserver(c6ebfa09-1d89-41a1-975e-0d041b544630)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-779d48f5d9-dsbpp_calico-apiserver(c6ebfa09-1d89-41a1-975e-0d041b544630)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a68bf2fe20fe8241ece0438f3d2d33a6cb6d0512af7d808f8184cdf260b62dad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-779d48f5d9-dsbpp" podUID="c6ebfa09-1d89-41a1-975e-0d041b544630" Mar 17 17:40:31.053552 containerd[1595]: time="2025-03-17T17:40:31.053287322Z" level=error msg="Failed to destroy network for sandbox \"ac0efcfe96b6c12ef5ec0b7000810e0e2965d5634139771788530249931169bd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:31.053873 containerd[1595]: time="2025-03-17T17:40:31.053835766Z" level=error msg="encountered an error cleaning up failed sandbox \"ac0efcfe96b6c12ef5ec0b7000810e0e2965d5634139771788530249931169bd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:31.053938 containerd[1595]: time="2025-03-17T17:40:31.053911693Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-24zxx,Uid:e6243402-8f9c-4b35-b2c7-317fe823ae81,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ac0efcfe96b6c12ef5ec0b7000810e0e2965d5634139771788530249931169bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:31.054304 kubelet[2894]: E0317 17:40:31.054241 2894 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac0efcfe96b6c12ef5ec0b7000810e0e2965d5634139771788530249931169bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:31.054515 kubelet[2894]: E0317 17:40:31.054472 2894 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac0efcfe96b6c12ef5ec0b7000810e0e2965d5634139771788530249931169bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-24zxx" Mar 17 17:40:31.054515 kubelet[2894]: E0317 17:40:31.054512 2894 
kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac0efcfe96b6c12ef5ec0b7000810e0e2965d5634139771788530249931169bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-24zxx" Mar 17 17:40:31.055001 kubelet[2894]: E0317 17:40:31.054592 2894 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-24zxx_calico-system(e6243402-8f9c-4b35-b2c7-317fe823ae81)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-24zxx_calico-system(e6243402-8f9c-4b35-b2c7-317fe823ae81)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ac0efcfe96b6c12ef5ec0b7000810e0e2965d5634139771788530249931169bd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-24zxx" podUID="e6243402-8f9c-4b35-b2c7-317fe823ae81" Mar 17 17:40:31.701000 kubelet[2894]: I0317 17:40:31.700954 2894 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20eeb52a08d0b508aef33de21dde17e0cd81210aa71fa73dbc2ac4461c919633" Mar 17 17:40:31.701894 containerd[1595]: time="2025-03-17T17:40:31.701746701Z" level=info msg="StopPodSandbox for \"20eeb52a08d0b508aef33de21dde17e0cd81210aa71fa73dbc2ac4461c919633\"" Mar 17 17:40:31.702031 containerd[1595]: time="2025-03-17T17:40:31.701988356Z" level=info msg="Ensure that sandbox 20eeb52a08d0b508aef33de21dde17e0cd81210aa71fa73dbc2ac4461c919633 in task-service has been cleanup successfully" Mar 17 17:40:31.702120 kubelet[2894]: I0317 17:40:31.701913 2894 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac0efcfe96b6c12ef5ec0b7000810e0e2965d5634139771788530249931169bd" Mar 17 17:40:31.702599 containerd[1595]: time="2025-03-17T17:40:31.702553962Z" level=info msg="StopPodSandbox for \"ac0efcfe96b6c12ef5ec0b7000810e0e2965d5634139771788530249931169bd\"" Mar 17 17:40:31.702781 containerd[1595]: time="2025-03-17T17:40:31.702672961Z" level=info msg="TearDown network for sandbox \"20eeb52a08d0b508aef33de21dde17e0cd81210aa71fa73dbc2ac4461c919633\" successfully" Mar 17 17:40:31.702781 containerd[1595]: time="2025-03-17T17:40:31.702693541Z" level=info msg="StopPodSandbox for \"20eeb52a08d0b508aef33de21dde17e0cd81210aa71fa73dbc2ac4461c919633\" returns successfully" Mar 17 17:40:31.703000 containerd[1595]: time="2025-03-17T17:40:31.702975683Z" level=info msg="Ensure that sandbox ac0efcfe96b6c12ef5ec0b7000810e0e2965d5634139771788530249931169bd in task-service has been cleanup successfully" Mar 17 17:40:31.703521 containerd[1595]: time="2025-03-17T17:40:31.703488239Z" level=info msg="StopPodSandbox for \"365256915cb838ef59c5441635355f0f8c542b492296dba2257adbd1208d3145\"" Mar 17 17:40:31.703660 containerd[1595]: time="2025-03-17T17:40:31.703600294Z" level=info msg="TearDown network for sandbox \"365256915cb838ef59c5441635355f0f8c542b492296dba2257adbd1208d3145\" successfully" Mar 17 17:40:31.703660 containerd[1595]: time="2025-03-17T17:40:31.703616996Z" level=info msg="StopPodSandbox for \"365256915cb838ef59c5441635355f0f8c542b492296dba2257adbd1208d3145\" returns successfully" Mar 17 17:40:31.703816 kubelet[2894]: I0317 17:40:31.703792 2894 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42b421cddd2705783b89c22afae3fb49cf60cb0687bb90d17c77fdd923f81a26" Mar 17 17:40:31.703906 kubelet[2894]: E0317 17:40:31.703865 2894 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:31.704347 containerd[1595]: time="2025-03-17T17:40:31.704302153Z" level=info msg="TearDown network for sandbox \"ac0efcfe96b6c12ef5ec0b7000810e0e2965d5634139771788530249931169bd\" successfully" Mar 17 17:40:31.704498 containerd[1595]: time="2025-03-17T17:40:31.704427464Z" level=info msg="StopPodSandbox for \"ac0efcfe96b6c12ef5ec0b7000810e0e2965d5634139771788530249931169bd\" returns successfully" Mar 17 17:40:31.705458 containerd[1595]: time="2025-03-17T17:40:31.705402187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5xpt7,Uid:1cbd3c90-0c66-408d-9e5d-1382eccfbde6,Namespace:kube-system,Attempt:2,}" Mar 17 17:40:31.705579 containerd[1595]: time="2025-03-17T17:40:31.705433297Z" level=info msg="StopPodSandbox for \"42b421cddd2705783b89c22afae3fb49cf60cb0687bb90d17c77fdd923f81a26\"" Mar 17 17:40:31.705794 containerd[1595]: time="2025-03-17T17:40:31.705442345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-24zxx,Uid:e6243402-8f9c-4b35-b2c7-317fe823ae81,Namespace:calico-system,Attempt:1,}" Mar 17 17:40:31.705997 containerd[1595]: time="2025-03-17T17:40:31.705968015Z" level=info msg="Ensure that sandbox 42b421cddd2705783b89c22afae3fb49cf60cb0687bb90d17c77fdd923f81a26 in task-service has been cleanup successfully" Mar 17 17:40:31.706610 containerd[1595]: time="2025-03-17T17:40:31.706494157Z" level=info msg="TearDown network for sandbox \"42b421cddd2705783b89c22afae3fb49cf60cb0687bb90d17c77fdd923f81a26\" successfully" Mar 17 17:40:31.706610 containerd[1595]: time="2025-03-17T17:40:31.706518253Z" level=info msg="StopPodSandbox for \"42b421cddd2705783b89c22afae3fb49cf60cb0687bb90d17c77fdd923f81a26\" returns successfully" Mar 17 17:40:31.706859 containerd[1595]: time="2025-03-17T17:40:31.706829160Z" level=info msg="StopPodSandbox for \"6d5908974dd55c896be2ce2e8fc7161568647ff858d2e39ef99109ac7e6c038f\"" Mar 17 17:40:31.707508 containerd[1595]: time="2025-03-17T17:40:31.706932158Z" level=info msg="TearDown network for sandbox \"6d5908974dd55c896be2ce2e8fc7161568647ff858d2e39ef99109ac7e6c038f\" successfully" Mar 17 17:40:31.707508 containerd[1595]: time="2025-03-17T17:40:31.706947237Z" level=info msg="StopPodSandbox for \"6d5908974dd55c896be2ce2e8fc7161568647ff858d2e39ef99109ac7e6c038f\" returns successfully" Mar 17 17:40:31.707508 containerd[1595]: time="2025-03-17T17:40:31.707469491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b6b58f89d-g52xg,Uid:43ddfd49-802e-4437-b6f0-ed427cdd6be8,Namespace:calico-system,Attempt:2,}" Mar 17 17:40:31.707609 kubelet[2894]: I0317 17:40:31.707048 2894 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ffaa7346b30a0a0fdab8bbda65eb3e04adb1e85c75bb216bbff56520558abc7" Mar 17 17:40:31.707649 containerd[1595]: time="2025-03-17T17:40:31.707535799Z" level=info msg="StopPodSandbox for \"9ffaa7346b30a0a0fdab8bbda65eb3e04adb1e85c75bb216bbff56520558abc7\"" Mar 17 17:40:31.709007 kubelet[2894]: I0317 17:40:31.708971 2894 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="a68bf2fe20fe8241ece0438f3d2d33a6cb6d0512af7d808f8184cdf260b62dad" Mar 17 17:40:31.709957 containerd[1595]: time="2025-03-17T17:40:31.709561662Z" level=info msg="StopPodSandbox for \"a68bf2fe20fe8241ece0438f3d2d33a6cb6d0512af7d808f8184cdf260b62dad\"" Mar 17 17:40:31.709957 containerd[1595]: time="2025-03-17T17:40:31.709803568Z" level=info msg="Ensure that sandbox a68bf2fe20fe8241ece0438f3d2d33a6cb6d0512af7d808f8184cdf260b62dad in task-service has been cleanup successfully" Mar 17 17:40:31.710187 containerd[1595]: time="2025-03-17T17:40:31.710147278Z" level=info msg="TearDown network for sandbox \"a68bf2fe20fe8241ece0438f3d2d33a6cb6d0512af7d808f8184cdf260b62dad\" successfully" Mar 17 17:40:31.710187 containerd[1595]: time="2025-03-17T17:40:31.710172606Z" level=info msg="StopPodSandbox for \"a68bf2fe20fe8241ece0438f3d2d33a6cb6d0512af7d808f8184cdf260b62dad\" returns successfully" Mar 17 17:40:31.710444 kubelet[2894]: I0317 17:40:31.710402 2894 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9d430bac5a5a7a8efd8a37750b38ef6c82b3d3bf17afcb8593cde6668d95c1c" Mar 17 17:40:31.710577 containerd[1595]: time="2025-03-17T17:40:31.710548279Z" level=info msg="StopPodSandbox for \"fd6dfebbb17b8ec1edcc3a670e197687101f350c8be57ce7dffd6c1bfaccbdbd\"" Mar 17 17:40:31.710671 containerd[1595]: time="2025-03-17T17:40:31.710647820Z" level=info msg="TearDown network for sandbox \"fd6dfebbb17b8ec1edcc3a670e197687101f350c8be57ce7dffd6c1bfaccbdbd\" successfully" Mar 17 17:40:31.710671 containerd[1595]: time="2025-03-17T17:40:31.710668460Z" level=info msg="StopPodSandbox for \"fd6dfebbb17b8ec1edcc3a670e197687101f350c8be57ce7dffd6c1bfaccbdbd\" returns successfully" Mar 17 17:40:31.711082 containerd[1595]: time="2025-03-17T17:40:31.711043721Z" level=info msg="StopPodSandbox for \"b9d430bac5a5a7a8efd8a37750b38ef6c82b3d3bf17afcb8593cde6668d95c1c\"" Mar 17 17:40:31.711266 containerd[1595]: time="2025-03-17T17:40:31.711095641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-779d48f5d9-dsbpp,Uid:c6ebfa09-1d89-41a1-975e-0d041b544630,Namespace:calico-apiserver,Attempt:2,}" Mar 17 17:40:31.711410 containerd[1595]: time="2025-03-17T17:40:31.711383915Z" level=info msg="Ensure that sandbox b9d430bac5a5a7a8efd8a37750b38ef6c82b3d3bf17afcb8593cde6668d95c1c in task-service has been cleanup successfully" Mar 17 17:40:31.711614 containerd[1595]: time="2025-03-17T17:40:31.711590061Z" level=info msg="TearDown network for sandbox \"b9d430bac5a5a7a8efd8a37750b38ef6c82b3d3bf17afcb8593cde6668d95c1c\" successfully" Mar 17 17:40:31.711614 containerd[1595]: time="2025-03-17T17:40:31.711610280Z" level=info msg="StopPodSandbox for \"b9d430bac5a5a7a8efd8a37750b38ef6c82b3d3bf17afcb8593cde6668d95c1c\" returns successfully" Mar 17 17:40:31.712085 containerd[1595]: time="2025-03-17T17:40:31.712038653Z" level=info msg="StopPodSandbox for \"f5c43aa1edb80e8f71c4f32c2f94d14b61bb5ad2e565373b70dd7c592880ce33\"" Mar 17 17:40:31.712199 containerd[1595]: time="2025-03-17T17:40:31.712176959Z" level=info msg="TearDown network for sandbox \"f5c43aa1edb80e8f71c4f32c2f94d14b61bb5ad2e565373b70dd7c592880ce33\" successfully" Mar 17 17:40:31.712271 containerd[1595]: time="2025-03-17T17:40:31.712197108Z" level=info msg="StopPodSandbox for \"f5c43aa1edb80e8f71c4f32c2f94d14b61bb5ad2e565373b70dd7c592880ce33\" returns successfully" Mar 17 17:40:31.712681 containerd[1595]: time="2025-03-17T17:40:31.712647734Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-779d48f5d9-9lw4k,Uid:05bc58a2-8b10-4350-b41e-7b091d9a3a8c,Namespace:calico-apiserver,Attempt:2,}" Mar 17 17:40:31.718565 containerd[1595]: time="2025-03-17T17:40:31.718500454Z" level=info msg="Ensure that sandbox 9ffaa7346b30a0a0fdab8bbda65eb3e04adb1e85c75bb216bbff56520558abc7 in task-service has been cleanup successfully" Mar 17 17:40:31.718809 containerd[1595]: time="2025-03-17T17:40:31.718774912Z" level=info msg="TearDown network for sandbox \"9ffaa7346b30a0a0fdab8bbda65eb3e04adb1e85c75bb216bbff56520558abc7\" successfully" Mar 17 17:40:31.718809 containerd[1595]: time="2025-03-17T17:40:31.718793919Z" level=info msg="StopPodSandbox for \"9ffaa7346b30a0a0fdab8bbda65eb3e04adb1e85c75bb216bbff56520558abc7\" returns successfully" Mar 17 17:40:31.719366 containerd[1595]: time="2025-03-17T17:40:31.719328155Z" level=info msg="StopPodSandbox for \"e4b287cc3440b029a61eafe60d6154dd7ee593208c644b16f34a54fd4da37eac\"" Mar 17 17:40:31.719474 containerd[1595]: time="2025-03-17T17:40:31.719446282Z" level=info msg="TearDown network for sandbox \"e4b287cc3440b029a61eafe60d6154dd7ee593208c644b16f34a54fd4da37eac\" successfully" Mar 17 17:40:31.719474 containerd[1595]: time="2025-03-17T17:40:31.719468905Z" level=info msg="StopPodSandbox for \"e4b287cc3440b029a61eafe60d6154dd7ee593208c644b16f34a54fd4da37eac\" returns successfully" Mar 17 17:40:31.719827 kubelet[2894]: E0317 17:40:31.719781 2894 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:31.720321 containerd[1595]: time="2025-03-17T17:40:31.720283391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j5l2k,Uid:e68c1525-3bc8-4435-a253-fa308a8e7604,Namespace:kube-system,Attempt:2,}" Mar 17 17:40:31.783035 systemd[1]: run-netns-cni\x2d928c0265\x2d25d4\x2d1a4e\x2d0874\x2daf6c17d8aee5.mount: Deactivated successfully. Mar 17 17:40:31.783282 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ac0efcfe96b6c12ef5ec0b7000810e0e2965d5634139771788530249931169bd-shm.mount: Deactivated successfully. Mar 17 17:40:31.783466 systemd[1]: run-netns-cni\x2db72bef6f\x2d9b26\x2d4f0f\x2de607\x2dedcd477877b5.mount: Deactivated successfully. Mar 17 17:40:31.783647 systemd[1]: run-netns-cni\x2d5f242765\x2d8fc0\x2dd6d3\x2d5477\x2d320f02ac7ffb.mount: Deactivated successfully. Mar 17 17:40:31.783831 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-42b421cddd2705783b89c22afae3fb49cf60cb0687bb90d17c77fdd923f81a26-shm.mount: Deactivated successfully. Mar 17 17:40:31.784052 systemd[1]: run-netns-cni\x2d9f868e9e\x2da661\x2da4b6\x2d5767\x2da0f1fb88231a.mount: Deactivated successfully. Mar 17 17:40:31.784202 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a68bf2fe20fe8241ece0438f3d2d33a6cb6d0512af7d808f8184cdf260b62dad-shm.mount: Deactivated successfully. Mar 17 17:40:31.784392 systemd[1]: run-netns-cni\x2d813c8602\x2dde75\x2d936a\x2d5e65\x2d23435b4f06b0.mount: Deactivated successfully. Mar 17 17:40:31.784558 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9ffaa7346b30a0a0fdab8bbda65eb3e04adb1e85c75bb216bbff56520558abc7-shm.mount: Deactivated successfully. Mar 17 17:40:31.784727 systemd[1]: run-netns-cni\x2d4acbc5f0\x2d64b0\x2d2761\x2d5dd5\x2ddd64a25f9206.mount: Deactivated successfully. 
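[Editor's note] Every RunPodSandbox attempt above, and the retries that follow, fails for the same reason: the Calico CNI plugin cannot stat /var/lib/calico/nodename, a file the calico/node container writes once it is running, so pod networking cannot be set up and the kubelet keeps retrying with an incremented Attempt counter while systemd cleans up the leftover netns and shm mounts. A minimal sketch of that readiness check, written here in Python purely for illustration (the real plugin is Go; the path and the guidance text are taken from the error message itself), might look like this:

import os
import sys

# Path named in the CNI error above; calico/node writes the node's name
# here once it has started and mounted /var/lib/calico/ into the plugin's view.
NODENAME_FILE = "/var/lib/calico/nodename"

def calico_node_ready() -> bool:
    """Mirror the check the log complains about: does the nodename file exist?"""
    try:
        os.stat(NODENAME_FILE)
    except FileNotFoundError:
        return False
    return True

if __name__ == "__main__":
    if calico_node_ready():
        with open(NODENAME_FILE) as f:
            print(f"calico/node has registered this node as: {f.read().strip()}")
        sys.exit(0)
    print("stat /var/lib/calico/nodename: no such file or directory - "
          "check that the calico/node container is running and has mounted /var/lib/calico/")
    sys.exit(1)

Once the calico/node image pulled a few entries earlier (ghcr.io/flatcar/calico/node:v3.29.2) starts and writes that file, the retried sandbox creations should stop failing with this particular error.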
Mar 17 17:40:31.941002 containerd[1595]: time="2025-03-17T17:40:31.940932272Z" level=error msg="Failed to destroy network for sandbox \"34eb4f16d7996747e5b14e123a347d522aeb89d8a3955f6791e639343c24d3a9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:31.942636 containerd[1595]: time="2025-03-17T17:40:31.942594316Z" level=error msg="encountered an error cleaning up failed sandbox \"34eb4f16d7996747e5b14e123a347d522aeb89d8a3955f6791e639343c24d3a9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:31.942701 containerd[1595]: time="2025-03-17T17:40:31.942671806Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-24zxx,Uid:e6243402-8f9c-4b35-b2c7-317fe823ae81,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"34eb4f16d7996747e5b14e123a347d522aeb89d8a3955f6791e639343c24d3a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:31.943943 kubelet[2894]: E0317 17:40:31.942967 2894 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34eb4f16d7996747e5b14e123a347d522aeb89d8a3955f6791e639343c24d3a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:31.943943 kubelet[2894]: E0317 17:40:31.943056 2894 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34eb4f16d7996747e5b14e123a347d522aeb89d8a3955f6791e639343c24d3a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-24zxx" Mar 17 17:40:31.943943 kubelet[2894]: E0317 17:40:31.943080 2894 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34eb4f16d7996747e5b14e123a347d522aeb89d8a3955f6791e639343c24d3a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-24zxx" Mar 17 17:40:31.944468 kubelet[2894]: E0317 17:40:31.943121 2894 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-24zxx_calico-system(e6243402-8f9c-4b35-b2c7-317fe823ae81)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-24zxx_calico-system(e6243402-8f9c-4b35-b2c7-317fe823ae81)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"34eb4f16d7996747e5b14e123a347d522aeb89d8a3955f6791e639343c24d3a9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-24zxx" podUID="e6243402-8f9c-4b35-b2c7-317fe823ae81" Mar 17 17:40:31.946659 containerd[1595]: time="2025-03-17T17:40:31.946606619Z" level=error msg="Failed to destroy network for sandbox \"e8e267785d373d30ff890d65856ca59b4c0a3425918915226c4a5960a98dee38\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:31.948136 containerd[1595]: time="2025-03-17T17:40:31.948099138Z" level=error msg="encountered an error cleaning up failed sandbox \"e8e267785d373d30ff890d65856ca59b4c0a3425918915226c4a5960a98dee38\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:31.948216 containerd[1595]: time="2025-03-17T17:40:31.948177859Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5xpt7,Uid:1cbd3c90-0c66-408d-9e5d-1382eccfbde6,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"e8e267785d373d30ff890d65856ca59b4c0a3425918915226c4a5960a98dee38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:31.948691 containerd[1595]: time="2025-03-17T17:40:31.948483417Z" level=error msg="Failed to destroy network for sandbox \"f89cfd950642aff10b965e79591c09ac4fdcd01831ada871cb26d2e8ea76c47f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:31.948795 kubelet[2894]: E0317 17:40:31.948749 2894 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8e267785d373d30ff890d65856ca59b4c0a3425918915226c4a5960a98dee38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:31.948853 kubelet[2894]: E0317 17:40:31.948819 2894 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8e267785d373d30ff890d65856ca59b4c0a3425918915226c4a5960a98dee38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-5xpt7" Mar 17 17:40:31.948853 kubelet[2894]: E0317 17:40:31.948841 2894 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8e267785d373d30ff890d65856ca59b4c0a3425918915226c4a5960a98dee38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-5xpt7" Mar 17 17:40:31.948912 kubelet[2894]: E0317 17:40:31.948894 2894 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-5xpt7_kube-system(1cbd3c90-0c66-408d-9e5d-1382eccfbde6)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-5xpt7_kube-system(1cbd3c90-0c66-408d-9e5d-1382eccfbde6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e8e267785d373d30ff890d65856ca59b4c0a3425918915226c4a5960a98dee38\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-5xpt7" podUID="1cbd3c90-0c66-408d-9e5d-1382eccfbde6" Mar 17 17:40:31.949742 containerd[1595]: time="2025-03-17T17:40:31.949555136Z" level=error msg="encountered an error cleaning up failed sandbox \"f89cfd950642aff10b965e79591c09ac4fdcd01831ada871cb26d2e8ea76c47f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:31.949742 containerd[1595]: time="2025-03-17T17:40:31.949617797Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-779d48f5d9-dsbpp,Uid:c6ebfa09-1d89-41a1-975e-0d041b544630,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"f89cfd950642aff10b965e79591c09ac4fdcd01831ada871cb26d2e8ea76c47f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:31.950160 kubelet[2894]: E0317 17:40:31.950125 2894 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f89cfd950642aff10b965e79591c09ac4fdcd01831ada871cb26d2e8ea76c47f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:31.950205 kubelet[2894]: E0317 17:40:31.950170 2894 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f89cfd950642aff10b965e79591c09ac4fdcd01831ada871cb26d2e8ea76c47f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-779d48f5d9-dsbpp" Mar 17 17:40:31.950205 kubelet[2894]: E0317 17:40:31.950191 2894 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f89cfd950642aff10b965e79591c09ac4fdcd01831ada871cb26d2e8ea76c47f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-779d48f5d9-dsbpp" Mar 17 17:40:31.950415 kubelet[2894]: E0317 17:40:31.950241 2894 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-779d48f5d9-dsbpp_calico-apiserver(c6ebfa09-1d89-41a1-975e-0d041b544630)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-779d48f5d9-dsbpp_calico-apiserver(c6ebfa09-1d89-41a1-975e-0d041b544630)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"f89cfd950642aff10b965e79591c09ac4fdcd01831ada871cb26d2e8ea76c47f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-779d48f5d9-dsbpp" podUID="c6ebfa09-1d89-41a1-975e-0d041b544630" Mar 17 17:40:31.963396 containerd[1595]: time="2025-03-17T17:40:31.962631692Z" level=error msg="Failed to destroy network for sandbox \"a5886a93d22cec47bea8d8fe870e1a8aedd1311baa8a46f1f6d33a721fd6d1f0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:31.963396 containerd[1595]: time="2025-03-17T17:40:31.963154707Z" level=error msg="encountered an error cleaning up failed sandbox \"a5886a93d22cec47bea8d8fe870e1a8aedd1311baa8a46f1f6d33a721fd6d1f0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:31.963396 containerd[1595]: time="2025-03-17T17:40:31.963253937Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j5l2k,Uid:e68c1525-3bc8-4435-a253-fa308a8e7604,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"a5886a93d22cec47bea8d8fe870e1a8aedd1311baa8a46f1f6d33a721fd6d1f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:31.967296 kubelet[2894]: E0317 17:40:31.966005 2894 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5886a93d22cec47bea8d8fe870e1a8aedd1311baa8a46f1f6d33a721fd6d1f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:31.967296 kubelet[2894]: E0317 17:40:31.966109 2894 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5886a93d22cec47bea8d8fe870e1a8aedd1311baa8a46f1f6d33a721fd6d1f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-j5l2k" Mar 17 17:40:31.967296 kubelet[2894]: E0317 17:40:31.966138 2894 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5886a93d22cec47bea8d8fe870e1a8aedd1311baa8a46f1f6d33a721fd6d1f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-j5l2k" Mar 17 17:40:31.967524 kubelet[2894]: E0317 17:40:31.966206 2894 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-j5l2k_kube-system(e68c1525-3bc8-4435-a253-fa308a8e7604)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-7db6d8ff4d-j5l2k_kube-system(e68c1525-3bc8-4435-a253-fa308a8e7604)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a5886a93d22cec47bea8d8fe870e1a8aedd1311baa8a46f1f6d33a721fd6d1f0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-j5l2k" podUID="e68c1525-3bc8-4435-a253-fa308a8e7604" Mar 17 17:40:31.976392 containerd[1595]: time="2025-03-17T17:40:31.976334611Z" level=error msg="Failed to destroy network for sandbox \"49134f7de4eb41c8fca39f464b46178e5f924d3d67df790cc4ff6aac22e62c20\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:31.976795 containerd[1595]: time="2025-03-17T17:40:31.976760128Z" level=error msg="encountered an error cleaning up failed sandbox \"49134f7de4eb41c8fca39f464b46178e5f924d3d67df790cc4ff6aac22e62c20\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:31.976847 containerd[1595]: time="2025-03-17T17:40:31.976823069Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-779d48f5d9-9lw4k,Uid:05bc58a2-8b10-4350-b41e-7b091d9a3a8c,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"49134f7de4eb41c8fca39f464b46178e5f924d3d67df790cc4ff6aac22e62c20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:31.977568 kubelet[2894]: E0317 17:40:31.977093 2894 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49134f7de4eb41c8fca39f464b46178e5f924d3d67df790cc4ff6aac22e62c20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:31.977568 kubelet[2894]: E0317 17:40:31.977178 2894 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49134f7de4eb41c8fca39f464b46178e5f924d3d67df790cc4ff6aac22e62c20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-779d48f5d9-9lw4k" Mar 17 17:40:31.977568 kubelet[2894]: E0317 17:40:31.977204 2894 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49134f7de4eb41c8fca39f464b46178e5f924d3d67df790cc4ff6aac22e62c20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-779d48f5d9-9lw4k" Mar 17 17:40:31.977719 kubelet[2894]: E0317 17:40:31.977291 2894 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-779d48f5d9-9lw4k_calico-apiserver(05bc58a2-8b10-4350-b41e-7b091d9a3a8c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-779d48f5d9-9lw4k_calico-apiserver(05bc58a2-8b10-4350-b41e-7b091d9a3a8c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"49134f7de4eb41c8fca39f464b46178e5f924d3d67df790cc4ff6aac22e62c20\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-779d48f5d9-9lw4k" podUID="05bc58a2-8b10-4350-b41e-7b091d9a3a8c" Mar 17 17:40:31.981013 containerd[1595]: time="2025-03-17T17:40:31.980947287Z" level=error msg="Failed to destroy network for sandbox \"4781c562b9eeba235aaf21f9b2b6e59972144759785c73a0b5142cf2ae289716\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:31.981524 containerd[1595]: time="2025-03-17T17:40:31.981493958Z" level=error msg="encountered an error cleaning up failed sandbox \"4781c562b9eeba235aaf21f9b2b6e59972144759785c73a0b5142cf2ae289716\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:31.981595 containerd[1595]: time="2025-03-17T17:40:31.981561998Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b6b58f89d-g52xg,Uid:43ddfd49-802e-4437-b6f0-ed427cdd6be8,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"4781c562b9eeba235aaf21f9b2b6e59972144759785c73a0b5142cf2ae289716\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:31.981855 kubelet[2894]: E0317 17:40:31.981807 2894 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4781c562b9eeba235aaf21f9b2b6e59972144759785c73a0b5142cf2ae289716\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:31.981902 kubelet[2894]: E0317 17:40:31.981876 2894 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4781c562b9eeba235aaf21f9b2b6e59972144759785c73a0b5142cf2ae289716\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5b6b58f89d-g52xg" Mar 17 17:40:31.981902 kubelet[2894]: E0317 17:40:31.981898 2894 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4781c562b9eeba235aaf21f9b2b6e59972144759785c73a0b5142cf2ae289716\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5b6b58f89d-g52xg" Mar 
17 17:40:31.981970 kubelet[2894]: E0317 17:40:31.981946 2894 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5b6b58f89d-g52xg_calico-system(43ddfd49-802e-4437-b6f0-ed427cdd6be8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5b6b58f89d-g52xg_calico-system(43ddfd49-802e-4437-b6f0-ed427cdd6be8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4781c562b9eeba235aaf21f9b2b6e59972144759785c73a0b5142cf2ae289716\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5b6b58f89d-g52xg" podUID="43ddfd49-802e-4437-b6f0-ed427cdd6be8" Mar 17 17:40:32.715693 kubelet[2894]: I0317 17:40:32.715643 2894 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34eb4f16d7996747e5b14e123a347d522aeb89d8a3955f6791e639343c24d3a9" Mar 17 17:40:32.716797 containerd[1595]: time="2025-03-17T17:40:32.716382165Z" level=info msg="StopPodSandbox for \"34eb4f16d7996747e5b14e123a347d522aeb89d8a3955f6791e639343c24d3a9\"" Mar 17 17:40:32.716797 containerd[1595]: time="2025-03-17T17:40:32.716625483Z" level=info msg="Ensure that sandbox 34eb4f16d7996747e5b14e123a347d522aeb89d8a3955f6791e639343c24d3a9 in task-service has been cleanup successfully" Mar 17 17:40:32.717724 containerd[1595]: time="2025-03-17T17:40:32.717703753Z" level=info msg="TearDown network for sandbox \"34eb4f16d7996747e5b14e123a347d522aeb89d8a3955f6791e639343c24d3a9\" successfully" Mar 17 17:40:32.717829 containerd[1595]: time="2025-03-17T17:40:32.717816700Z" level=info msg="StopPodSandbox for \"34eb4f16d7996747e5b14e123a347d522aeb89d8a3955f6791e639343c24d3a9\" returns successfully" Mar 17 17:40:32.718361 containerd[1595]: time="2025-03-17T17:40:32.718344294Z" level=info msg="StopPodSandbox for \"ac0efcfe96b6c12ef5ec0b7000810e0e2965d5634139771788530249931169bd\"" Mar 17 17:40:32.718878 containerd[1595]: time="2025-03-17T17:40:32.718839505Z" level=info msg="TearDown network for sandbox \"ac0efcfe96b6c12ef5ec0b7000810e0e2965d5634139771788530249931169bd\" successfully" Mar 17 17:40:32.719097 containerd[1595]: time="2025-03-17T17:40:32.719082282Z" level=info msg="StopPodSandbox for \"ac0efcfe96b6c12ef5ec0b7000810e0e2965d5634139771788530249931169bd\" returns successfully" Mar 17 17:40:32.719643 containerd[1595]: time="2025-03-17T17:40:32.719616307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-24zxx,Uid:e6243402-8f9c-4b35-b2c7-317fe823ae81,Namespace:calico-system,Attempt:2,}" Mar 17 17:40:32.719999 kubelet[2894]: I0317 17:40:32.719964 2894 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8e267785d373d30ff890d65856ca59b4c0a3425918915226c4a5960a98dee38" Mar 17 17:40:32.722253 kubelet[2894]: I0317 17:40:32.721723 2894 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4781c562b9eeba235aaf21f9b2b6e59972144759785c73a0b5142cf2ae289716" Mar 17 17:40:32.722776 containerd[1595]: time="2025-03-17T17:40:32.722418901Z" level=info msg="StopPodSandbox for \"4781c562b9eeba235aaf21f9b2b6e59972144759785c73a0b5142cf2ae289716\"" Mar 17 17:40:32.722776 containerd[1595]: time="2025-03-17T17:40:32.722624876Z" level=info msg="Ensure that sandbox 4781c562b9eeba235aaf21f9b2b6e59972144759785c73a0b5142cf2ae289716 in task-service has been cleanup 
successfully" Mar 17 17:40:32.723238 containerd[1595]: time="2025-03-17T17:40:32.723196635Z" level=info msg="TearDown network for sandbox \"4781c562b9eeba235aaf21f9b2b6e59972144759785c73a0b5142cf2ae289716\" successfully" Mar 17 17:40:32.723316 containerd[1595]: time="2025-03-17T17:40:32.723296196Z" level=info msg="StopPodSandbox for \"4781c562b9eeba235aaf21f9b2b6e59972144759785c73a0b5142cf2ae289716\" returns successfully" Mar 17 17:40:32.724240 containerd[1595]: time="2025-03-17T17:40:32.724192388Z" level=info msg="StopPodSandbox for \"42b421cddd2705783b89c22afae3fb49cf60cb0687bb90d17c77fdd923f81a26\"" Mar 17 17:40:32.724432 containerd[1595]: time="2025-03-17T17:40:32.724348338Z" level=info msg="TearDown network for sandbox \"42b421cddd2705783b89c22afae3fb49cf60cb0687bb90d17c77fdd923f81a26\" successfully" Mar 17 17:40:32.724432 containerd[1595]: time="2025-03-17T17:40:32.724404084Z" level=info msg="StopPodSandbox for \"42b421cddd2705783b89c22afae3fb49cf60cb0687bb90d17c77fdd923f81a26\" returns successfully" Mar 17 17:40:32.724619 kubelet[2894]: I0317 17:40:32.724600 2894 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5886a93d22cec47bea8d8fe870e1a8aedd1311baa8a46f1f6d33a721fd6d1f0" Mar 17 17:40:32.725146 containerd[1595]: time="2025-03-17T17:40:32.725119058Z" level=info msg="StopPodSandbox for \"6d5908974dd55c896be2ce2e8fc7161568647ff858d2e39ef99109ac7e6c038f\"" Mar 17 17:40:32.725267 containerd[1595]: time="2025-03-17T17:40:32.725215313Z" level=info msg="TearDown network for sandbox \"6d5908974dd55c896be2ce2e8fc7161568647ff858d2e39ef99109ac7e6c038f\" successfully" Mar 17 17:40:32.725297 containerd[1595]: time="2025-03-17T17:40:32.725266771Z" level=info msg="StopPodSandbox for \"6d5908974dd55c896be2ce2e8fc7161568647ff858d2e39ef99109ac7e6c038f\" returns successfully" Mar 17 17:40:32.725428 containerd[1595]: time="2025-03-17T17:40:32.725384848Z" level=info msg="StopPodSandbox for \"a5886a93d22cec47bea8d8fe870e1a8aedd1311baa8a46f1f6d33a721fd6d1f0\"" Mar 17 17:40:32.725568 containerd[1595]: time="2025-03-17T17:40:32.725546489Z" level=info msg="Ensure that sandbox a5886a93d22cec47bea8d8fe870e1a8aedd1311baa8a46f1f6d33a721fd6d1f0 in task-service has been cleanup successfully" Mar 17 17:40:32.726784 containerd[1595]: time="2025-03-17T17:40:32.725747395Z" level=info msg="StopPodSandbox for \"e8e267785d373d30ff890d65856ca59b4c0a3425918915226c4a5960a98dee38\"" Mar 17 17:40:32.726784 containerd[1595]: time="2025-03-17T17:40:32.726532843Z" level=info msg="Ensure that sandbox e8e267785d373d30ff890d65856ca59b4c0a3425918915226c4a5960a98dee38 in task-service has been cleanup successfully" Mar 17 17:40:32.726784 containerd[1595]: time="2025-03-17T17:40:32.726537933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b6b58f89d-g52xg,Uid:43ddfd49-802e-4437-b6f0-ed427cdd6be8,Namespace:calico-system,Attempt:3,}" Mar 17 17:40:32.727071 containerd[1595]: time="2025-03-17T17:40:32.727049676Z" level=info msg="TearDown network for sandbox \"a5886a93d22cec47bea8d8fe870e1a8aedd1311baa8a46f1f6d33a721fd6d1f0\" successfully" Mar 17 17:40:32.727282 containerd[1595]: time="2025-03-17T17:40:32.727159908Z" level=info msg="StopPodSandbox for \"a5886a93d22cec47bea8d8fe870e1a8aedd1311baa8a46f1f6d33a721fd6d1f0\" returns successfully" Mar 17 17:40:32.727282 containerd[1595]: time="2025-03-17T17:40:32.727052181Z" level=info msg="TearDown network for sandbox \"e8e267785d373d30ff890d65856ca59b4c0a3425918915226c4a5960a98dee38\" successfully" Mar 17 17:40:32.727282 
containerd[1595]: time="2025-03-17T17:40:32.727212168Z" level=info msg="StopPodSandbox for \"e8e267785d373d30ff890d65856ca59b4c0a3425918915226c4a5960a98dee38\" returns successfully" Mar 17 17:40:32.728082 containerd[1595]: time="2025-03-17T17:40:32.728055969Z" level=info msg="StopPodSandbox for \"9ffaa7346b30a0a0fdab8bbda65eb3e04adb1e85c75bb216bbff56520558abc7\"" Mar 17 17:40:32.728170 containerd[1595]: time="2025-03-17T17:40:32.728141012Z" level=info msg="TearDown network for sandbox \"9ffaa7346b30a0a0fdab8bbda65eb3e04adb1e85c75bb216bbff56520558abc7\" successfully" Mar 17 17:40:32.728170 containerd[1595]: time="2025-03-17T17:40:32.728150471Z" level=info msg="StopPodSandbox for \"9ffaa7346b30a0a0fdab8bbda65eb3e04adb1e85c75bb216bbff56520558abc7\" returns successfully" Mar 17 17:40:32.729374 kubelet[2894]: I0317 17:40:32.729098 2894 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f89cfd950642aff10b965e79591c09ac4fdcd01831ada871cb26d2e8ea76c47f" Mar 17 17:40:32.729460 containerd[1595]: time="2025-03-17T17:40:32.729283267Z" level=info msg="StopPodSandbox for \"e4b287cc3440b029a61eafe60d6154dd7ee593208c644b16f34a54fd4da37eac\"" Mar 17 17:40:32.729930 containerd[1595]: time="2025-03-17T17:40:32.729907075Z" level=info msg="TearDown network for sandbox \"e4b287cc3440b029a61eafe60d6154dd7ee593208c644b16f34a54fd4da37eac\" successfully" Mar 17 17:40:32.729930 containerd[1595]: time="2025-03-17T17:40:32.729926953Z" level=info msg="StopPodSandbox for \"e4b287cc3440b029a61eafe60d6154dd7ee593208c644b16f34a54fd4da37eac\" returns successfully" Mar 17 17:40:32.730362 kubelet[2894]: E0317 17:40:32.730195 2894 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:32.730590 containerd[1595]: time="2025-03-17T17:40:32.730565741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j5l2k,Uid:e68c1525-3bc8-4435-a253-fa308a8e7604,Namespace:kube-system,Attempt:3,}" Mar 17 17:40:32.731156 containerd[1595]: time="2025-03-17T17:40:32.731130766Z" level=info msg="StopPodSandbox for \"f89cfd950642aff10b965e79591c09ac4fdcd01831ada871cb26d2e8ea76c47f\"" Mar 17 17:40:32.731382 containerd[1595]: time="2025-03-17T17:40:32.731351560Z" level=info msg="Ensure that sandbox f89cfd950642aff10b965e79591c09ac4fdcd01831ada871cb26d2e8ea76c47f in task-service has been cleanup successfully" Mar 17 17:40:32.732171 containerd[1595]: time="2025-03-17T17:40:32.731952435Z" level=info msg="TearDown network for sandbox \"f89cfd950642aff10b965e79591c09ac4fdcd01831ada871cb26d2e8ea76c47f\" successfully" Mar 17 17:40:32.732171 containerd[1595]: time="2025-03-17T17:40:32.731989996Z" level=info msg="StopPodSandbox for \"f89cfd950642aff10b965e79591c09ac4fdcd01831ada871cb26d2e8ea76c47f\" returns successfully" Mar 17 17:40:32.732954 containerd[1595]: time="2025-03-17T17:40:32.732863574Z" level=info msg="StopPodSandbox for \"a68bf2fe20fe8241ece0438f3d2d33a6cb6d0512af7d808f8184cdf260b62dad\"" Mar 17 17:40:32.733741 containerd[1595]: time="2025-03-17T17:40:32.733692476Z" level=info msg="TearDown network for sandbox \"a68bf2fe20fe8241ece0438f3d2d33a6cb6d0512af7d808f8184cdf260b62dad\" successfully" Mar 17 17:40:32.733850 containerd[1595]: time="2025-03-17T17:40:32.733799742Z" level=info msg="StopPodSandbox for \"a68bf2fe20fe8241ece0438f3d2d33a6cb6d0512af7d808f8184cdf260b62dad\" returns successfully" Mar 17 17:40:32.733899 kubelet[2894]: I0317 17:40:32.733878 2894 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49134f7de4eb41c8fca39f464b46178e5f924d3d67df790cc4ff6aac22e62c20" Mar 17 17:40:32.734548 containerd[1595]: time="2025-03-17T17:40:32.734478767Z" level=info msg="StopPodSandbox for \"20eeb52a08d0b508aef33de21dde17e0cd81210aa71fa73dbc2ac4461c919633\"" Mar 17 17:40:32.734548 containerd[1595]: time="2025-03-17T17:40:32.734539012Z" level=info msg="StopPodSandbox for \"fd6dfebbb17b8ec1edcc3a670e197687101f350c8be57ce7dffd6c1bfaccbdbd\"" Mar 17 17:40:32.734615 containerd[1595]: time="2025-03-17T17:40:32.734581764Z" level=info msg="TearDown network for sandbox \"20eeb52a08d0b508aef33de21dde17e0cd81210aa71fa73dbc2ac4461c919633\" successfully" Mar 17 17:40:32.734615 containerd[1595]: time="2025-03-17T17:40:32.734597445Z" level=info msg="StopPodSandbox for \"20eeb52a08d0b508aef33de21dde17e0cd81210aa71fa73dbc2ac4461c919633\" returns successfully" Mar 17 17:40:32.734673 containerd[1595]: time="2025-03-17T17:40:32.734645898Z" level=info msg="StopPodSandbox for \"49134f7de4eb41c8fca39f464b46178e5f924d3d67df790cc4ff6aac22e62c20\"" Mar 17 17:40:32.734747 containerd[1595]: time="2025-03-17T17:40:32.734719349Z" level=info msg="TearDown network for sandbox \"fd6dfebbb17b8ec1edcc3a670e197687101f350c8be57ce7dffd6c1bfaccbdbd\" successfully" Mar 17 17:40:32.734747 containerd[1595]: time="2025-03-17T17:40:32.734742393Z" level=info msg="StopPodSandbox for \"fd6dfebbb17b8ec1edcc3a670e197687101f350c8be57ce7dffd6c1bfaccbdbd\" returns successfully" Mar 17 17:40:32.734842 containerd[1595]: time="2025-03-17T17:40:32.734827707Z" level=info msg="Ensure that sandbox 49134f7de4eb41c8fca39f464b46178e5f924d3d67df790cc4ff6aac22e62c20 in task-service has been cleanup successfully" Mar 17 17:40:32.735418 containerd[1595]: time="2025-03-17T17:40:32.735390789Z" level=info msg="StopPodSandbox for \"365256915cb838ef59c5441635355f0f8c542b492296dba2257adbd1208d3145\"" Mar 17 17:40:32.735497 containerd[1595]: time="2025-03-17T17:40:32.735481313Z" level=info msg="TearDown network for sandbox \"365256915cb838ef59c5441635355f0f8c542b492296dba2257adbd1208d3145\" successfully" Mar 17 17:40:32.735521 containerd[1595]: time="2025-03-17T17:40:32.735496592Z" level=info msg="StopPodSandbox for \"365256915cb838ef59c5441635355f0f8c542b492296dba2257adbd1208d3145\" returns successfully" Mar 17 17:40:32.735619 containerd[1595]: time="2025-03-17T17:40:32.735596955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-779d48f5d9-dsbpp,Uid:c6ebfa09-1d89-41a1-975e-0d041b544630,Namespace:calico-apiserver,Attempt:3,}" Mar 17 17:40:32.736010 containerd[1595]: time="2025-03-17T17:40:32.735880219Z" level=info msg="TearDown network for sandbox \"49134f7de4eb41c8fca39f464b46178e5f924d3d67df790cc4ff6aac22e62c20\" successfully" Mar 17 17:40:32.736010 containerd[1595]: time="2025-03-17T17:40:32.735898123Z" level=info msg="StopPodSandbox for \"49134f7de4eb41c8fca39f464b46178e5f924d3d67df790cc4ff6aac22e62c20\" returns successfully" Mar 17 17:40:32.736080 kubelet[2894]: E0317 17:40:32.735919 2894 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:32.736366 containerd[1595]: time="2025-03-17T17:40:32.736197859Z" level=info msg="StopPodSandbox for \"b9d430bac5a5a7a8efd8a37750b38ef6c82b3d3bf17afcb8593cde6668d95c1c\"" Mar 17 17:40:32.736366 containerd[1595]: time="2025-03-17T17:40:32.736303042Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-5xpt7,Uid:1cbd3c90-0c66-408d-9e5d-1382eccfbde6,Namespace:kube-system,Attempt:3,}" Mar 17 17:40:32.736366 containerd[1595]: time="2025-03-17T17:40:32.736315164Z" level=info msg="TearDown network for sandbox \"b9d430bac5a5a7a8efd8a37750b38ef6c82b3d3bf17afcb8593cde6668d95c1c\" successfully" Mar 17 17:40:32.736366 containerd[1595]: time="2025-03-17T17:40:32.736330995Z" level=info msg="StopPodSandbox for \"b9d430bac5a5a7a8efd8a37750b38ef6c82b3d3bf17afcb8593cde6668d95c1c\" returns successfully" Mar 17 17:40:32.736820 containerd[1595]: time="2025-03-17T17:40:32.736663253Z" level=info msg="StopPodSandbox for \"f5c43aa1edb80e8f71c4f32c2f94d14b61bb5ad2e565373b70dd7c592880ce33\"" Mar 17 17:40:32.736820 containerd[1595]: time="2025-03-17T17:40:32.736752534Z" level=info msg="TearDown network for sandbox \"f5c43aa1edb80e8f71c4f32c2f94d14b61bb5ad2e565373b70dd7c592880ce33\" successfully" Mar 17 17:40:32.736820 containerd[1595]: time="2025-03-17T17:40:32.736765540Z" level=info msg="StopPodSandbox for \"f5c43aa1edb80e8f71c4f32c2f94d14b61bb5ad2e565373b70dd7c592880ce33\" returns successfully" Mar 17 17:40:32.737193 containerd[1595]: time="2025-03-17T17:40:32.737138506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-779d48f5d9-9lw4k,Uid:05bc58a2-8b10-4350-b41e-7b091d9a3a8c,Namespace:calico-apiserver,Attempt:3,}" Mar 17 17:40:32.783130 systemd[1]: run-netns-cni\x2d67108575\x2d03e6\x2d2b38\x2d2a47\x2d257f3f6df661.mount: Deactivated successfully. Mar 17 17:40:32.783395 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-34eb4f16d7996747e5b14e123a347d522aeb89d8a3955f6791e639343c24d3a9-shm.mount: Deactivated successfully. Mar 17 17:40:32.783586 systemd[1]: run-netns-cni\x2d6dc2fb25\x2dc30b\x2d4ca9\x2d8d41\x2d5877baae4f91.mount: Deactivated successfully. Mar 17 17:40:32.783740 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f89cfd950642aff10b965e79591c09ac4fdcd01831ada871cb26d2e8ea76c47f-shm.mount: Deactivated successfully. Mar 17 17:40:32.783881 systemd[1]: run-netns-cni\x2dce87d476\x2dfd7d\x2ddf3b\x2d11da\x2dbb73e90d0d60.mount: Deactivated successfully. Mar 17 17:40:32.784042 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e8e267785d373d30ff890d65856ca59b4c0a3425918915226c4a5960a98dee38-shm.mount: Deactivated successfully. Mar 17 17:40:32.784187 systemd[1]: run-netns-cni\x2dce2d02e5\x2d3613\x2d1882\x2d60fe\x2d48956709ba61.mount: Deactivated successfully. Mar 17 17:40:32.784362 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4781c562b9eeba235aaf21f9b2b6e59972144759785c73a0b5142cf2ae289716-shm.mount: Deactivated successfully. 
Mar 17 17:40:33.451454 containerd[1595]: time="2025-03-17T17:40:33.451392681Z" level=error msg="Failed to destroy network for sandbox \"d1cb43dde5233e158ca5f81018f9de97cf33fb91019056b6ca49d61b708d25d9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:33.451902 containerd[1595]: time="2025-03-17T17:40:33.451876258Z" level=error msg="encountered an error cleaning up failed sandbox \"d1cb43dde5233e158ca5f81018f9de97cf33fb91019056b6ca49d61b708d25d9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:33.451971 containerd[1595]: time="2025-03-17T17:40:33.451938979Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-779d48f5d9-dsbpp,Uid:c6ebfa09-1d89-41a1-975e-0d041b544630,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"d1cb43dde5233e158ca5f81018f9de97cf33fb91019056b6ca49d61b708d25d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:33.452443 kubelet[2894]: E0317 17:40:33.452398 2894 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1cb43dde5233e158ca5f81018f9de97cf33fb91019056b6ca49d61b708d25d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:33.452730 kubelet[2894]: E0317 17:40:33.452475 2894 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1cb43dde5233e158ca5f81018f9de97cf33fb91019056b6ca49d61b708d25d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-779d48f5d9-dsbpp" Mar 17 17:40:33.452730 kubelet[2894]: E0317 17:40:33.452497 2894 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1cb43dde5233e158ca5f81018f9de97cf33fb91019056b6ca49d61b708d25d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-779d48f5d9-dsbpp" Mar 17 17:40:33.452730 kubelet[2894]: E0317 17:40:33.452536 2894 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-779d48f5d9-dsbpp_calico-apiserver(c6ebfa09-1d89-41a1-975e-0d041b544630)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-779d48f5d9-dsbpp_calico-apiserver(c6ebfa09-1d89-41a1-975e-0d041b544630)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d1cb43dde5233e158ca5f81018f9de97cf33fb91019056b6ca49d61b708d25d9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-779d48f5d9-dsbpp" podUID="c6ebfa09-1d89-41a1-975e-0d041b544630" Mar 17 17:40:33.469879 containerd[1595]: time="2025-03-17T17:40:33.469817140Z" level=error msg="Failed to destroy network for sandbox \"8f5cb31cabf8e8d27bccef253b449fa86feb2ca1ec05565c3fcf2543ccde53fe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:33.470574 containerd[1595]: time="2025-03-17T17:40:33.470453772Z" level=error msg="encountered an error cleaning up failed sandbox \"8f5cb31cabf8e8d27bccef253b449fa86feb2ca1ec05565c3fcf2543ccde53fe\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:33.470725 containerd[1595]: time="2025-03-17T17:40:33.470606154Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-779d48f5d9-9lw4k,Uid:05bc58a2-8b10-4350-b41e-7b091d9a3a8c,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"8f5cb31cabf8e8d27bccef253b449fa86feb2ca1ec05565c3fcf2543ccde53fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:33.471008 kubelet[2894]: E0317 17:40:33.470959 2894 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f5cb31cabf8e8d27bccef253b449fa86feb2ca1ec05565c3fcf2543ccde53fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:33.471073 kubelet[2894]: E0317 17:40:33.471020 2894 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f5cb31cabf8e8d27bccef253b449fa86feb2ca1ec05565c3fcf2543ccde53fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-779d48f5d9-9lw4k" Mar 17 17:40:33.471073 kubelet[2894]: E0317 17:40:33.471040 2894 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f5cb31cabf8e8d27bccef253b449fa86feb2ca1ec05565c3fcf2543ccde53fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-779d48f5d9-9lw4k" Mar 17 17:40:33.471149 kubelet[2894]: E0317 17:40:33.471086 2894 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-779d48f5d9-9lw4k_calico-apiserver(05bc58a2-8b10-4350-b41e-7b091d9a3a8c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-779d48f5d9-9lw4k_calico-apiserver(05bc58a2-8b10-4350-b41e-7b091d9a3a8c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"8f5cb31cabf8e8d27bccef253b449fa86feb2ca1ec05565c3fcf2543ccde53fe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-779d48f5d9-9lw4k" podUID="05bc58a2-8b10-4350-b41e-7b091d9a3a8c" Mar 17 17:40:33.476018 containerd[1595]: time="2025-03-17T17:40:33.475975182Z" level=error msg="Failed to destroy network for sandbox \"4270495bee62ae326cae8538cb4638b63b8a315b467917aefb8a5faa220863b6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:33.476478 containerd[1595]: time="2025-03-17T17:40:33.476444693Z" level=error msg="encountered an error cleaning up failed sandbox \"4270495bee62ae326cae8538cb4638b63b8a315b467917aefb8a5faa220863b6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:33.476563 containerd[1595]: time="2025-03-17T17:40:33.476502754Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j5l2k,Uid:e68c1525-3bc8-4435-a253-fa308a8e7604,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"4270495bee62ae326cae8538cb4638b63b8a315b467917aefb8a5faa220863b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:33.478258 kubelet[2894]: E0317 17:40:33.476697 2894 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4270495bee62ae326cae8538cb4638b63b8a315b467917aefb8a5faa220863b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:33.478258 kubelet[2894]: E0317 17:40:33.476779 2894 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4270495bee62ae326cae8538cb4638b63b8a315b467917aefb8a5faa220863b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-j5l2k" Mar 17 17:40:33.478258 kubelet[2894]: E0317 17:40:33.476804 2894 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4270495bee62ae326cae8538cb4638b63b8a315b467917aefb8a5faa220863b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-j5l2k" Mar 17 17:40:33.478438 containerd[1595]: time="2025-03-17T17:40:33.476999568Z" level=error msg="Failed to destroy network for sandbox \"f7ef8277baee4989ca736c39bd862f683db77596fa62f499175b7f5729ba6845\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Mar 17 17:40:33.478438 containerd[1595]: time="2025-03-17T17:40:33.477411809Z" level=error msg="encountered an error cleaning up failed sandbox \"f7ef8277baee4989ca736c39bd862f683db77596fa62f499175b7f5729ba6845\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:33.478438 containerd[1595]: time="2025-03-17T17:40:33.477461434Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-24zxx,Uid:e6243402-8f9c-4b35-b2c7-317fe823ae81,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"f7ef8277baee4989ca736c39bd862f683db77596fa62f499175b7f5729ba6845\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:33.478517 kubelet[2894]: E0317 17:40:33.476856 2894 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-j5l2k_kube-system(e68c1525-3bc8-4435-a253-fa308a8e7604)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-j5l2k_kube-system(e68c1525-3bc8-4435-a253-fa308a8e7604)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4270495bee62ae326cae8538cb4638b63b8a315b467917aefb8a5faa220863b6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-j5l2k" podUID="e68c1525-3bc8-4435-a253-fa308a8e7604" Mar 17 17:40:33.478517 kubelet[2894]: E0317 17:40:33.477607 2894 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7ef8277baee4989ca736c39bd862f683db77596fa62f499175b7f5729ba6845\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:33.478517 kubelet[2894]: E0317 17:40:33.477644 2894 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7ef8277baee4989ca736c39bd862f683db77596fa62f499175b7f5729ba6845\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-24zxx" Mar 17 17:40:33.478625 kubelet[2894]: E0317 17:40:33.477660 2894 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7ef8277baee4989ca736c39bd862f683db77596fa62f499175b7f5729ba6845\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-24zxx" Mar 17 17:40:33.478625 kubelet[2894]: E0317 17:40:33.477688 2894 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-24zxx_calico-system(e6243402-8f9c-4b35-b2c7-317fe823ae81)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-24zxx_calico-system(e6243402-8f9c-4b35-b2c7-317fe823ae81)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f7ef8277baee4989ca736c39bd862f683db77596fa62f499175b7f5729ba6845\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-24zxx" podUID="e6243402-8f9c-4b35-b2c7-317fe823ae81" Mar 17 17:40:33.482966 containerd[1595]: time="2025-03-17T17:40:33.482923050Z" level=error msg="Failed to destroy network for sandbox \"3ae5e7e1a2edfff6d4b0c4d0359bb49c7476f5217a6d73d532f2042191956211\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:33.483727 containerd[1595]: time="2025-03-17T17:40:33.483702235Z" level=error msg="encountered an error cleaning up failed sandbox \"3ae5e7e1a2edfff6d4b0c4d0359bb49c7476f5217a6d73d532f2042191956211\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:33.483786 containerd[1595]: time="2025-03-17T17:40:33.483756439Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b6b58f89d-g52xg,Uid:43ddfd49-802e-4437-b6f0-ed427cdd6be8,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"3ae5e7e1a2edfff6d4b0c4d0359bb49c7476f5217a6d73d532f2042191956211\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:33.483965 kubelet[2894]: E0317 17:40:33.483930 2894 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ae5e7e1a2edfff6d4b0c4d0359bb49c7476f5217a6d73d532f2042191956211\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:33.484020 kubelet[2894]: E0317 17:40:33.483990 2894 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ae5e7e1a2edfff6d4b0c4d0359bb49c7476f5217a6d73d532f2042191956211\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5b6b58f89d-g52xg" Mar 17 17:40:33.484077 kubelet[2894]: E0317 17:40:33.484016 2894 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ae5e7e1a2edfff6d4b0c4d0359bb49c7476f5217a6d73d532f2042191956211\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5b6b58f89d-g52xg" Mar 17 17:40:33.484140 kubelet[2894]: E0317 17:40:33.484076 2894 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-kube-controllers-5b6b58f89d-g52xg_calico-system(43ddfd49-802e-4437-b6f0-ed427cdd6be8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5b6b58f89d-g52xg_calico-system(43ddfd49-802e-4437-b6f0-ed427cdd6be8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3ae5e7e1a2edfff6d4b0c4d0359bb49c7476f5217a6d73d532f2042191956211\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5b6b58f89d-g52xg" podUID="43ddfd49-802e-4437-b6f0-ed427cdd6be8" Mar 17 17:40:33.491540 containerd[1595]: time="2025-03-17T17:40:33.491488232Z" level=error msg="Failed to destroy network for sandbox \"1adf26c22b4b5d1793570092cabedf8622ab2953053e532861e9cd3cabf9e781\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:33.492255 containerd[1595]: time="2025-03-17T17:40:33.492057665Z" level=error msg="encountered an error cleaning up failed sandbox \"1adf26c22b4b5d1793570092cabedf8622ab2953053e532861e9cd3cabf9e781\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:33.492255 containerd[1595]: time="2025-03-17T17:40:33.492165512Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5xpt7,Uid:1cbd3c90-0c66-408d-9e5d-1382eccfbde6,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"1adf26c22b4b5d1793570092cabedf8622ab2953053e532861e9cd3cabf9e781\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:33.492539 kubelet[2894]: E0317 17:40:33.492497 2894 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1adf26c22b4b5d1793570092cabedf8622ab2953053e532861e9cd3cabf9e781\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:33.492613 kubelet[2894]: E0317 17:40:33.492552 2894 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1adf26c22b4b5d1793570092cabedf8622ab2953053e532861e9cd3cabf9e781\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-5xpt7" Mar 17 17:40:33.492613 kubelet[2894]: E0317 17:40:33.492580 2894 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1adf26c22b4b5d1793570092cabedf8622ab2953053e532861e9cd3cabf9e781\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-5xpt7" Mar 17 17:40:33.492679 kubelet[2894]: E0317 
17:40:33.492627 2894 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-5xpt7_kube-system(1cbd3c90-0c66-408d-9e5d-1382eccfbde6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-5xpt7_kube-system(1cbd3c90-0c66-408d-9e5d-1382eccfbde6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1adf26c22b4b5d1793570092cabedf8622ab2953053e532861e9cd3cabf9e781\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-5xpt7" podUID="1cbd3c90-0c66-408d-9e5d-1382eccfbde6" Mar 17 17:40:33.738238 kubelet[2894]: I0317 17:40:33.738123 2894 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ae5e7e1a2edfff6d4b0c4d0359bb49c7476f5217a6d73d532f2042191956211" Mar 17 17:40:33.739031 containerd[1595]: time="2025-03-17T17:40:33.738988769Z" level=info msg="StopPodSandbox for \"3ae5e7e1a2edfff6d4b0c4d0359bb49c7476f5217a6d73d532f2042191956211\"" Mar 17 17:40:33.739738 containerd[1595]: time="2025-03-17T17:40:33.739194945Z" level=info msg="Ensure that sandbox 3ae5e7e1a2edfff6d4b0c4d0359bb49c7476f5217a6d73d532f2042191956211 in task-service has been cleanup successfully" Mar 17 17:40:33.739738 containerd[1595]: time="2025-03-17T17:40:33.739423984Z" level=info msg="TearDown network for sandbox \"3ae5e7e1a2edfff6d4b0c4d0359bb49c7476f5217a6d73d532f2042191956211\" successfully" Mar 17 17:40:33.739738 containerd[1595]: time="2025-03-17T17:40:33.739436919Z" level=info msg="StopPodSandbox for \"3ae5e7e1a2edfff6d4b0c4d0359bb49c7476f5217a6d73d532f2042191956211\" returns successfully" Mar 17 17:40:33.739919 containerd[1595]: time="2025-03-17T17:40:33.739884228Z" level=info msg="StopPodSandbox for \"4781c562b9eeba235aaf21f9b2b6e59972144759785c73a0b5142cf2ae289716\"" Mar 17 17:40:33.740024 containerd[1595]: time="2025-03-17T17:40:33.740004629Z" level=info msg="TearDown network for sandbox \"4781c562b9eeba235aaf21f9b2b6e59972144759785c73a0b5142cf2ae289716\" successfully" Mar 17 17:40:33.740024 containerd[1595]: time="2025-03-17T17:40:33.740018806Z" level=info msg="StopPodSandbox for \"4781c562b9eeba235aaf21f9b2b6e59972144759785c73a0b5142cf2ae289716\" returns successfully" Mar 17 17:40:33.741118 containerd[1595]: time="2025-03-17T17:40:33.740472377Z" level=info msg="StopPodSandbox for \"42b421cddd2705783b89c22afae3fb49cf60cb0687bb90d17c77fdd923f81a26\"" Mar 17 17:40:33.741118 containerd[1595]: time="2025-03-17T17:40:33.740755360Z" level=info msg="TearDown network for sandbox \"42b421cddd2705783b89c22afae3fb49cf60cb0687bb90d17c77fdd923f81a26\" successfully" Mar 17 17:40:33.741118 containerd[1595]: time="2025-03-17T17:40:33.740768425Z" level=info msg="StopPodSandbox for \"42b421cddd2705783b89c22afae3fb49cf60cb0687bb90d17c77fdd923f81a26\" returns successfully" Mar 17 17:40:33.741118 containerd[1595]: time="2025-03-17T17:40:33.740934023Z" level=info msg="StopPodSandbox for \"6d5908974dd55c896be2ce2e8fc7161568647ff858d2e39ef99109ac7e6c038f\"" Mar 17 17:40:33.741118 containerd[1595]: time="2025-03-17T17:40:33.741008676Z" level=info msg="TearDown network for sandbox \"6d5908974dd55c896be2ce2e8fc7161568647ff858d2e39ef99109ac7e6c038f\" successfully" Mar 17 17:40:33.741118 containerd[1595]: time="2025-03-17T17:40:33.741018225Z" level=info msg="StopPodSandbox for 
\"6d5908974dd55c896be2ce2e8fc7161568647ff858d2e39ef99109ac7e6c038f\" returns successfully" Mar 17 17:40:33.741286 kubelet[2894]: I0317 17:40:33.741178 2894 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4270495bee62ae326cae8538cb4638b63b8a315b467917aefb8a5faa220863b6" Mar 17 17:40:33.741599 containerd[1595]: time="2025-03-17T17:40:33.741569803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b6b58f89d-g52xg,Uid:43ddfd49-802e-4437-b6f0-ed427cdd6be8,Namespace:calico-system,Attempt:4,}" Mar 17 17:40:33.742060 containerd[1595]: time="2025-03-17T17:40:33.742006903Z" level=info msg="StopPodSandbox for \"4270495bee62ae326cae8538cb4638b63b8a315b467917aefb8a5faa220863b6\"" Mar 17 17:40:33.742414 containerd[1595]: time="2025-03-17T17:40:33.742341115Z" level=info msg="Ensure that sandbox 4270495bee62ae326cae8538cb4638b63b8a315b467917aefb8a5faa220863b6 in task-service has been cleanup successfully" Mar 17 17:40:33.742595 containerd[1595]: time="2025-03-17T17:40:33.742570885Z" level=info msg="TearDown network for sandbox \"4270495bee62ae326cae8538cb4638b63b8a315b467917aefb8a5faa220863b6\" successfully" Mar 17 17:40:33.742595 containerd[1595]: time="2025-03-17T17:40:33.742590403Z" level=info msg="StopPodSandbox for \"4270495bee62ae326cae8538cb4638b63b8a315b467917aefb8a5faa220863b6\" returns successfully" Mar 17 17:40:33.743313 containerd[1595]: time="2025-03-17T17:40:33.743114570Z" level=info msg="StopPodSandbox for \"a5886a93d22cec47bea8d8fe870e1a8aedd1311baa8a46f1f6d33a721fd6d1f0\"" Mar 17 17:40:33.743313 containerd[1595]: time="2025-03-17T17:40:33.743210894Z" level=info msg="TearDown network for sandbox \"a5886a93d22cec47bea8d8fe870e1a8aedd1311baa8a46f1f6d33a721fd6d1f0\" successfully" Mar 17 17:40:33.743313 containerd[1595]: time="2025-03-17T17:40:33.743250400Z" level=info msg="StopPodSandbox for \"a5886a93d22cec47bea8d8fe870e1a8aedd1311baa8a46f1f6d33a721fd6d1f0\" returns successfully" Mar 17 17:40:33.744216 containerd[1595]: time="2025-03-17T17:40:33.744189433Z" level=info msg="StopPodSandbox for \"9ffaa7346b30a0a0fdab8bbda65eb3e04adb1e85c75bb216bbff56520558abc7\"" Mar 17 17:40:33.744216 containerd[1595]: time="2025-03-17T17:40:33.744301448Z" level=info msg="TearDown network for sandbox \"9ffaa7346b30a0a0fdab8bbda65eb3e04adb1e85c75bb216bbff56520558abc7\" successfully" Mar 17 17:40:33.744382 containerd[1595]: time="2025-03-17T17:40:33.744319732Z" level=info msg="StopPodSandbox for \"9ffaa7346b30a0a0fdab8bbda65eb3e04adb1e85c75bb216bbff56520558abc7\" returns successfully" Mar 17 17:40:33.745327 kubelet[2894]: I0317 17:40:33.745177 2894 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7ef8277baee4989ca736c39bd862f683db77596fa62f499175b7f5729ba6845" Mar 17 17:40:33.746498 containerd[1595]: time="2025-03-17T17:40:33.745588779Z" level=info msg="StopPodSandbox for \"e4b287cc3440b029a61eafe60d6154dd7ee593208c644b16f34a54fd4da37eac\"" Mar 17 17:40:33.746498 containerd[1595]: time="2025-03-17T17:40:33.745706545Z" level=info msg="TearDown network for sandbox \"e4b287cc3440b029a61eafe60d6154dd7ee593208c644b16f34a54fd4da37eac\" successfully" Mar 17 17:40:33.746498 containerd[1595]: time="2025-03-17T17:40:33.745721964Z" level=info msg="StopPodSandbox for \"e4b287cc3440b029a61eafe60d6154dd7ee593208c644b16f34a54fd4da37eac\" returns successfully" Mar 17 17:40:33.746610 containerd[1595]: time="2025-03-17T17:40:33.746576265Z" level=info msg="StopPodSandbox for 
\"f7ef8277baee4989ca736c39bd862f683db77596fa62f499175b7f5729ba6845\"" Mar 17 17:40:33.746844 containerd[1595]: time="2025-03-17T17:40:33.746750439Z" level=info msg="Ensure that sandbox f7ef8277baee4989ca736c39bd862f683db77596fa62f499175b7f5729ba6845 in task-service has been cleanup successfully" Mar 17 17:40:33.746943 kubelet[2894]: E0317 17:40:33.746819 2894 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:33.747120 containerd[1595]: time="2025-03-17T17:40:33.747098006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j5l2k,Uid:e68c1525-3bc8-4435-a253-fa308a8e7604,Namespace:kube-system,Attempt:4,}" Mar 17 17:40:33.755453 containerd[1595]: time="2025-03-17T17:40:33.755411355Z" level=info msg="TearDown network for sandbox \"f7ef8277baee4989ca736c39bd862f683db77596fa62f499175b7f5729ba6845\" successfully" Mar 17 17:40:33.758350 kubelet[2894]: I0317 17:40:33.756408 2894 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1cb43dde5233e158ca5f81018f9de97cf33fb91019056b6ca49d61b708d25d9" Mar 17 17:40:33.758418 containerd[1595]: time="2025-03-17T17:40:33.756447635Z" level=info msg="StopPodSandbox for \"f7ef8277baee4989ca736c39bd862f683db77596fa62f499175b7f5729ba6845\" returns successfully" Mar 17 17:40:33.767242 containerd[1595]: time="2025-03-17T17:40:33.764386395Z" level=info msg="StopPodSandbox for \"d1cb43dde5233e158ca5f81018f9de97cf33fb91019056b6ca49d61b708d25d9\"" Mar 17 17:40:33.767242 containerd[1595]: time="2025-03-17T17:40:33.764594395Z" level=info msg="Ensure that sandbox d1cb43dde5233e158ca5f81018f9de97cf33fb91019056b6ca49d61b708d25d9 in task-service has been cleanup successfully" Mar 17 17:40:33.770463 containerd[1595]: time="2025-03-17T17:40:33.770339704Z" level=info msg="TearDown network for sandbox \"d1cb43dde5233e158ca5f81018f9de97cf33fb91019056b6ca49d61b708d25d9\" successfully" Mar 17 17:40:33.773240 containerd[1595]: time="2025-03-17T17:40:33.770541862Z" level=info msg="StopPodSandbox for \"d1cb43dde5233e158ca5f81018f9de97cf33fb91019056b6ca49d61b708d25d9\" returns successfully" Mar 17 17:40:33.773681 containerd[1595]: time="2025-03-17T17:40:33.770491124Z" level=info msg="StopPodSandbox for \"34eb4f16d7996747e5b14e123a347d522aeb89d8a3955f6791e639343c24d3a9\"" Mar 17 17:40:33.773851 containerd[1595]: time="2025-03-17T17:40:33.773812480Z" level=info msg="TearDown network for sandbox \"34eb4f16d7996747e5b14e123a347d522aeb89d8a3955f6791e639343c24d3a9\" successfully" Mar 17 17:40:33.774036 containerd[1595]: time="2025-03-17T17:40:33.773979942Z" level=info msg="StopPodSandbox for \"34eb4f16d7996747e5b14e123a347d522aeb89d8a3955f6791e639343c24d3a9\" returns successfully" Mar 17 17:40:33.777503 containerd[1595]: time="2025-03-17T17:40:33.777486513Z" level=info msg="StopPodSandbox for \"f89cfd950642aff10b965e79591c09ac4fdcd01831ada871cb26d2e8ea76c47f\"" Mar 17 17:40:33.777728 containerd[1595]: time="2025-03-17T17:40:33.777511541Z" level=info msg="StopPodSandbox for \"ac0efcfe96b6c12ef5ec0b7000810e0e2965d5634139771788530249931169bd\"" Mar 17 17:40:33.777728 containerd[1595]: time="2025-03-17T17:40:33.777659976Z" level=info msg="TearDown network for sandbox \"f89cfd950642aff10b965e79591c09ac4fdcd01831ada871cb26d2e8ea76c47f\" successfully" Mar 17 17:40:33.777728 containerd[1595]: time="2025-03-17T17:40:33.777672029Z" level=info msg="StopPodSandbox for 
\"f89cfd950642aff10b965e79591c09ac4fdcd01831ada871cb26d2e8ea76c47f\" returns successfully" Mar 17 17:40:33.777813 containerd[1595]: time="2025-03-17T17:40:33.777740180Z" level=info msg="TearDown network for sandbox \"ac0efcfe96b6c12ef5ec0b7000810e0e2965d5634139771788530249931169bd\" successfully" Mar 17 17:40:33.777813 containerd[1595]: time="2025-03-17T17:40:33.777753275Z" level=info msg="StopPodSandbox for \"ac0efcfe96b6c12ef5ec0b7000810e0e2965d5634139771788530249931169bd\" returns successfully" Mar 17 17:40:33.787611 containerd[1595]: time="2025-03-17T17:40:33.786713356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-24zxx,Uid:e6243402-8f9c-4b35-b2c7-317fe823ae81,Namespace:calico-system,Attempt:3,}" Mar 17 17:40:33.787611 containerd[1595]: time="2025-03-17T17:40:33.786964768Z" level=info msg="StopPodSandbox for \"a68bf2fe20fe8241ece0438f3d2d33a6cb6d0512af7d808f8184cdf260b62dad\"" Mar 17 17:40:33.787611 containerd[1595]: time="2025-03-17T17:40:33.787046696Z" level=info msg="TearDown network for sandbox \"a68bf2fe20fe8241ece0438f3d2d33a6cb6d0512af7d808f8184cdf260b62dad\" successfully" Mar 17 17:40:33.787611 containerd[1595]: time="2025-03-17T17:40:33.787067285Z" level=info msg="StopPodSandbox for \"a68bf2fe20fe8241ece0438f3d2d33a6cb6d0512af7d808f8184cdf260b62dad\" returns successfully" Mar 17 17:40:33.788434 containerd[1595]: time="2025-03-17T17:40:33.788394994Z" level=info msg="StopPodSandbox for \"fd6dfebbb17b8ec1edcc3a670e197687101f350c8be57ce7dffd6c1bfaccbdbd\"" Mar 17 17:40:33.788607 containerd[1595]: time="2025-03-17T17:40:33.788525104Z" level=info msg="TearDown network for sandbox \"fd6dfebbb17b8ec1edcc3a670e197687101f350c8be57ce7dffd6c1bfaccbdbd\" successfully" Mar 17 17:40:33.788607 containerd[1595]: time="2025-03-17T17:40:33.788540794Z" level=info msg="StopPodSandbox for \"fd6dfebbb17b8ec1edcc3a670e197687101f350c8be57ce7dffd6c1bfaccbdbd\" returns successfully" Mar 17 17:40:33.789630 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f7ef8277baee4989ca736c39bd862f683db77596fa62f499175b7f5729ba6845-shm.mount: Deactivated successfully. Mar 17 17:40:33.789828 systemd[1]: run-netns-cni\x2db01526ad\x2d8d6e\x2d643e\x2d4779\x2dd03ffd67b211.mount: Deactivated successfully. Mar 17 17:40:33.789960 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3ae5e7e1a2edfff6d4b0c4d0359bb49c7476f5217a6d73d532f2042191956211-shm.mount: Deactivated successfully. 
Mar 17 17:40:33.790539 kubelet[2894]: I0317 17:40:33.790519 2894 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f5cb31cabf8e8d27bccef253b449fa86feb2ca1ec05565c3fcf2543ccde53fe" Mar 17 17:40:33.792257 containerd[1595]: time="2025-03-17T17:40:33.790955880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-779d48f5d9-dsbpp,Uid:c6ebfa09-1d89-41a1-975e-0d041b544630,Namespace:calico-apiserver,Attempt:4,}" Mar 17 17:40:33.792693 containerd[1595]: time="2025-03-17T17:40:33.792670392Z" level=info msg="StopPodSandbox for \"8f5cb31cabf8e8d27bccef253b449fa86feb2ca1ec05565c3fcf2543ccde53fe\"" Mar 17 17:40:33.794514 containerd[1595]: time="2025-03-17T17:40:33.794483532Z" level=info msg="Ensure that sandbox 8f5cb31cabf8e8d27bccef253b449fa86feb2ca1ec05565c3fcf2543ccde53fe in task-service has been cleanup successfully" Mar 17 17:40:33.799638 containerd[1595]: time="2025-03-17T17:40:33.798698594Z" level=info msg="TearDown network for sandbox \"8f5cb31cabf8e8d27bccef253b449fa86feb2ca1ec05565c3fcf2543ccde53fe\" successfully" Mar 17 17:40:33.799125 systemd[1]: run-netns-cni\x2dc4402323\x2d64cc\x2d0357\x2d8e36\x2d4e62169fe0dc.mount: Deactivated successfully. Mar 17 17:40:33.800109 kubelet[2894]: I0317 17:40:33.800075 2894 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1adf26c22b4b5d1793570092cabedf8622ab2953053e532861e9cd3cabf9e781" Mar 17 17:40:33.800627 containerd[1595]: time="2025-03-17T17:40:33.800606366Z" level=info msg="StopPodSandbox for \"8f5cb31cabf8e8d27bccef253b449fa86feb2ca1ec05565c3fcf2543ccde53fe\" returns successfully" Mar 17 17:40:33.801194 containerd[1595]: time="2025-03-17T17:40:33.801175850Z" level=info msg="StopPodSandbox for \"49134f7de4eb41c8fca39f464b46178e5f924d3d67df790cc4ff6aac22e62c20\"" Mar 17 17:40:33.801292 containerd[1595]: time="2025-03-17T17:40:33.801277444Z" level=info msg="TearDown network for sandbox \"49134f7de4eb41c8fca39f464b46178e5f924d3d67df790cc4ff6aac22e62c20\" successfully" Mar 17 17:40:33.801329 containerd[1595]: time="2025-03-17T17:40:33.801290670Z" level=info msg="StopPodSandbox for \"49134f7de4eb41c8fca39f464b46178e5f924d3d67df790cc4ff6aac22e62c20\" returns successfully" Mar 17 17:40:33.801359 containerd[1595]: time="2025-03-17T17:40:33.801331218Z" level=info msg="StopPodSandbox for \"1adf26c22b4b5d1793570092cabedf8622ab2953053e532861e9cd3cabf9e781\"" Mar 17 17:40:33.801625 containerd[1595]: time="2025-03-17T17:40:33.801591126Z" level=info msg="StopPodSandbox for \"b9d430bac5a5a7a8efd8a37750b38ef6c82b3d3bf17afcb8593cde6668d95c1c\"" Mar 17 17:40:33.801735 containerd[1595]: time="2025-03-17T17:40:33.801715806Z" level=info msg="TearDown network for sandbox \"b9d430bac5a5a7a8efd8a37750b38ef6c82b3d3bf17afcb8593cde6668d95c1c\" successfully" Mar 17 17:40:33.801760 containerd[1595]: time="2025-03-17T17:40:33.801734592Z" level=info msg="StopPodSandbox for \"b9d430bac5a5a7a8efd8a37750b38ef6c82b3d3bf17afcb8593cde6668d95c1c\" returns successfully" Mar 17 17:40:33.802098 containerd[1595]: time="2025-03-17T17:40:33.802071509Z" level=info msg="StopPodSandbox for \"f5c43aa1edb80e8f71c4f32c2f94d14b61bb5ad2e565373b70dd7c592880ce33\"" Mar 17 17:40:33.802196 containerd[1595]: time="2025-03-17T17:40:33.802180318Z" level=info msg="TearDown network for sandbox \"f5c43aa1edb80e8f71c4f32c2f94d14b61bb5ad2e565373b70dd7c592880ce33\" successfully" Mar 17 17:40:33.802235 containerd[1595]: time="2025-03-17T17:40:33.802195818Z" level=info msg="StopPodSandbox for 
\"f5c43aa1edb80e8f71c4f32c2f94d14b61bb5ad2e565373b70dd7c592880ce33\" returns successfully" Mar 17 17:40:33.802800 containerd[1595]: time="2025-03-17T17:40:33.802777274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-779d48f5d9-9lw4k,Uid:05bc58a2-8b10-4350-b41e-7b091d9a3a8c,Namespace:calico-apiserver,Attempt:4,}" Mar 17 17:40:33.803586 containerd[1595]: time="2025-03-17T17:40:33.803350364Z" level=info msg="Ensure that sandbox 1adf26c22b4b5d1793570092cabedf8622ab2953053e532861e9cd3cabf9e781 in task-service has been cleanup successfully" Mar 17 17:40:33.803752 containerd[1595]: time="2025-03-17T17:40:33.803735824Z" level=info msg="TearDown network for sandbox \"1adf26c22b4b5d1793570092cabedf8622ab2953053e532861e9cd3cabf9e781\" successfully" Mar 17 17:40:33.803995 containerd[1595]: time="2025-03-17T17:40:33.803970384Z" level=info msg="StopPodSandbox for \"1adf26c22b4b5d1793570092cabedf8622ab2953053e532861e9cd3cabf9e781\" returns successfully" Mar 17 17:40:33.808779 systemd[1]: run-netns-cni\x2df36a5cc5\x2d80e9\x2d10e3\x2daa4c\x2d7d4c84ae42cd.mount: Deactivated successfully. Mar 17 17:40:33.810388 containerd[1595]: time="2025-03-17T17:40:33.810188412Z" level=info msg="StopPodSandbox for \"e8e267785d373d30ff890d65856ca59b4c0a3425918915226c4a5960a98dee38\"" Mar 17 17:40:33.810388 containerd[1595]: time="2025-03-17T17:40:33.810312289Z" level=info msg="TearDown network for sandbox \"e8e267785d373d30ff890d65856ca59b4c0a3425918915226c4a5960a98dee38\" successfully" Mar 17 17:40:33.810388 containerd[1595]: time="2025-03-17T17:40:33.810324883Z" level=info msg="StopPodSandbox for \"e8e267785d373d30ff890d65856ca59b4c0a3425918915226c4a5960a98dee38\" returns successfully" Mar 17 17:40:33.810795 containerd[1595]: time="2025-03-17T17:40:33.810774306Z" level=info msg="StopPodSandbox for \"20eeb52a08d0b508aef33de21dde17e0cd81210aa71fa73dbc2ac4461c919633\"" Mar 17 17:40:33.810907 containerd[1595]: time="2025-03-17T17:40:33.810892783Z" level=info msg="TearDown network for sandbox \"20eeb52a08d0b508aef33de21dde17e0cd81210aa71fa73dbc2ac4461c919633\" successfully" Mar 17 17:40:33.810932 containerd[1595]: time="2025-03-17T17:40:33.810906891Z" level=info msg="StopPodSandbox for \"20eeb52a08d0b508aef33de21dde17e0cd81210aa71fa73dbc2ac4461c919633\" returns successfully" Mar 17 17:40:33.812284 containerd[1595]: time="2025-03-17T17:40:33.811124217Z" level=info msg="StopPodSandbox for \"365256915cb838ef59c5441635355f0f8c542b492296dba2257adbd1208d3145\"" Mar 17 17:40:33.812284 containerd[1595]: time="2025-03-17T17:40:33.811201175Z" level=info msg="TearDown network for sandbox \"365256915cb838ef59c5441635355f0f8c542b492296dba2257adbd1208d3145\" successfully" Mar 17 17:40:33.812284 containerd[1595]: time="2025-03-17T17:40:33.811210162Z" level=info msg="StopPodSandbox for \"365256915cb838ef59c5441635355f0f8c542b492296dba2257adbd1208d3145\" returns successfully" Mar 17 17:40:33.812446 kubelet[2894]: E0317 17:40:33.811556 2894 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:33.812795 containerd[1595]: time="2025-03-17T17:40:33.812767382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5xpt7,Uid:1cbd3c90-0c66-408d-9e5d-1382eccfbde6,Namespace:kube-system,Attempt:4,}" Mar 17 17:40:33.905019 containerd[1595]: time="2025-03-17T17:40:33.904939445Z" level=error msg="Failed to destroy network for sandbox 
\"71a5b18ed93f00fba67ee73d5c0f6d663c5df57ae9c7398b2f1ad57743e4dfa4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:33.905502 containerd[1595]: time="2025-03-17T17:40:33.905358740Z" level=error msg="encountered an error cleaning up failed sandbox \"71a5b18ed93f00fba67ee73d5c0f6d663c5df57ae9c7398b2f1ad57743e4dfa4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:33.905502 containerd[1595]: time="2025-03-17T17:40:33.905413314Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j5l2k,Uid:e68c1525-3bc8-4435-a253-fa308a8e7604,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"71a5b18ed93f00fba67ee73d5c0f6d663c5df57ae9c7398b2f1ad57743e4dfa4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:33.906129 kubelet[2894]: E0317 17:40:33.905762 2894 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71a5b18ed93f00fba67ee73d5c0f6d663c5df57ae9c7398b2f1ad57743e4dfa4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:33.906129 kubelet[2894]: E0317 17:40:33.905821 2894 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71a5b18ed93f00fba67ee73d5c0f6d663c5df57ae9c7398b2f1ad57743e4dfa4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-j5l2k" Mar 17 17:40:33.906129 kubelet[2894]: E0317 17:40:33.905848 2894 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71a5b18ed93f00fba67ee73d5c0f6d663c5df57ae9c7398b2f1ad57743e4dfa4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-j5l2k" Mar 17 17:40:33.906275 kubelet[2894]: E0317 17:40:33.905887 2894 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-j5l2k_kube-system(e68c1525-3bc8-4435-a253-fa308a8e7604)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-j5l2k_kube-system(e68c1525-3bc8-4435-a253-fa308a8e7604)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"71a5b18ed93f00fba67ee73d5c0f6d663c5df57ae9c7398b2f1ad57743e4dfa4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-j5l2k" podUID="e68c1525-3bc8-4435-a253-fa308a8e7604" Mar 17 17:40:33.927246 containerd[1595]: 
time="2025-03-17T17:40:33.925782366Z" level=error msg="Failed to destroy network for sandbox \"ed6b24f70fa60d242a22655e9a449dc4947159be0ee404338a20364a878f692f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:33.927246 containerd[1595]: time="2025-03-17T17:40:33.926315470Z" level=error msg="encountered an error cleaning up failed sandbox \"ed6b24f70fa60d242a22655e9a449dc4947159be0ee404338a20364a878f692f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:33.927246 containerd[1595]: time="2025-03-17T17:40:33.926365847Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b6b58f89d-g52xg,Uid:43ddfd49-802e-4437-b6f0-ed427cdd6be8,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"ed6b24f70fa60d242a22655e9a449dc4947159be0ee404338a20364a878f692f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:33.928096 kubelet[2894]: E0317 17:40:33.927597 2894 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed6b24f70fa60d242a22655e9a449dc4947159be0ee404338a20364a878f692f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:33.928096 kubelet[2894]: E0317 17:40:33.927660 2894 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed6b24f70fa60d242a22655e9a449dc4947159be0ee404338a20364a878f692f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5b6b58f89d-g52xg" Mar 17 17:40:33.928096 kubelet[2894]: E0317 17:40:33.927682 2894 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed6b24f70fa60d242a22655e9a449dc4947159be0ee404338a20364a878f692f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5b6b58f89d-g52xg" Mar 17 17:40:33.928308 kubelet[2894]: E0317 17:40:33.927726 2894 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5b6b58f89d-g52xg_calico-system(43ddfd49-802e-4437-b6f0-ed427cdd6be8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5b6b58f89d-g52xg_calico-system(43ddfd49-802e-4437-b6f0-ed427cdd6be8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ed6b24f70fa60d242a22655e9a449dc4947159be0ee404338a20364a878f692f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5b6b58f89d-g52xg" podUID="43ddfd49-802e-4437-b6f0-ed427cdd6be8" Mar 17 17:40:33.973691 containerd[1595]: time="2025-03-17T17:40:33.973639839Z" level=error msg="Failed to destroy network for sandbox \"2e1e7d1843a36206ce7f9b169d9c04eae943191c049cef7fc5546197a8f6354f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:33.974525 containerd[1595]: time="2025-03-17T17:40:33.974500130Z" level=error msg="encountered an error cleaning up failed sandbox \"2e1e7d1843a36206ce7f9b169d9c04eae943191c049cef7fc5546197a8f6354f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:33.975310 containerd[1595]: time="2025-03-17T17:40:33.974555397Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-779d48f5d9-9lw4k,Uid:05bc58a2-8b10-4350-b41e-7b091d9a3a8c,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"2e1e7d1843a36206ce7f9b169d9c04eae943191c049cef7fc5546197a8f6354f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:33.975405 kubelet[2894]: E0317 17:40:33.974819 2894 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e1e7d1843a36206ce7f9b169d9c04eae943191c049cef7fc5546197a8f6354f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:33.975405 kubelet[2894]: E0317 17:40:33.974894 2894 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e1e7d1843a36206ce7f9b169d9c04eae943191c049cef7fc5546197a8f6354f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-779d48f5d9-9lw4k" Mar 17 17:40:33.975405 kubelet[2894]: E0317 17:40:33.974918 2894 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e1e7d1843a36206ce7f9b169d9c04eae943191c049cef7fc5546197a8f6354f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-779d48f5d9-9lw4k" Mar 17 17:40:33.975490 kubelet[2894]: E0317 17:40:33.974987 2894 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-779d48f5d9-9lw4k_calico-apiserver(05bc58a2-8b10-4350-b41e-7b091d9a3a8c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-779d48f5d9-9lw4k_calico-apiserver(05bc58a2-8b10-4350-b41e-7b091d9a3a8c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2e1e7d1843a36206ce7f9b169d9c04eae943191c049cef7fc5546197a8f6354f\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-779d48f5d9-9lw4k" podUID="05bc58a2-8b10-4350-b41e-7b091d9a3a8c" Mar 17 17:40:33.975532 containerd[1595]: time="2025-03-17T17:40:33.975507895Z" level=error msg="Failed to destroy network for sandbox \"26f56c40fb6c99dbd1eab86735c25ff740b8a6908e9f46bd4b34e8974fec6340\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:33.975859 containerd[1595]: time="2025-03-17T17:40:33.975836716Z" level=error msg="encountered an error cleaning up failed sandbox \"26f56c40fb6c99dbd1eab86735c25ff740b8a6908e9f46bd4b34e8974fec6340\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:33.975893 containerd[1595]: time="2025-03-17T17:40:33.975873918Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-24zxx,Uid:e6243402-8f9c-4b35-b2c7-317fe823ae81,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"26f56c40fb6c99dbd1eab86735c25ff740b8a6908e9f46bd4b34e8974fec6340\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:33.975997 kubelet[2894]: E0317 17:40:33.975977 2894 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26f56c40fb6c99dbd1eab86735c25ff740b8a6908e9f46bd4b34e8974fec6340\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:33.976027 kubelet[2894]: E0317 17:40:33.976006 2894 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26f56c40fb6c99dbd1eab86735c25ff740b8a6908e9f46bd4b34e8974fec6340\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-24zxx" Mar 17 17:40:33.976027 kubelet[2894]: E0317 17:40:33.976020 2894 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26f56c40fb6c99dbd1eab86735c25ff740b8a6908e9f46bd4b34e8974fec6340\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-24zxx" Mar 17 17:40:33.976093 kubelet[2894]: E0317 17:40:33.976064 2894 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-24zxx_calico-system(e6243402-8f9c-4b35-b2c7-317fe823ae81)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-24zxx_calico-system(e6243402-8f9c-4b35-b2c7-317fe823ae81)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"26f56c40fb6c99dbd1eab86735c25ff740b8a6908e9f46bd4b34e8974fec6340\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-24zxx" podUID="e6243402-8f9c-4b35-b2c7-317fe823ae81" Mar 17 17:40:33.978538 containerd[1595]: time="2025-03-17T17:40:33.978404285Z" level=error msg="Failed to destroy network for sandbox \"2b5d6a23facb015513f2c43f3c63102f6d43652416d6f1140530390b70489c22\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:33.979029 containerd[1595]: time="2025-03-17T17:40:33.978982205Z" level=error msg="encountered an error cleaning up failed sandbox \"2b5d6a23facb015513f2c43f3c63102f6d43652416d6f1140530390b70489c22\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:33.979180 containerd[1595]: time="2025-03-17T17:40:33.979034264Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-779d48f5d9-dsbpp,Uid:c6ebfa09-1d89-41a1-975e-0d041b544630,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"2b5d6a23facb015513f2c43f3c63102f6d43652416d6f1140530390b70489c22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:33.979416 kubelet[2894]: E0317 17:40:33.979367 2894 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b5d6a23facb015513f2c43f3c63102f6d43652416d6f1140530390b70489c22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:33.979472 kubelet[2894]: E0317 17:40:33.979444 2894 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b5d6a23facb015513f2c43f3c63102f6d43652416d6f1140530390b70489c22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-779d48f5d9-dsbpp" Mar 17 17:40:33.979472 kubelet[2894]: E0317 17:40:33.979463 2894 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b5d6a23facb015513f2c43f3c63102f6d43652416d6f1140530390b70489c22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-779d48f5d9-dsbpp" Mar 17 17:40:33.979563 kubelet[2894]: E0317 17:40:33.979538 2894 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-779d48f5d9-dsbpp_calico-apiserver(c6ebfa09-1d89-41a1-975e-0d041b544630)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-779d48f5d9-dsbpp_calico-apiserver(c6ebfa09-1d89-41a1-975e-0d041b544630)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2b5d6a23facb015513f2c43f3c63102f6d43652416d6f1140530390b70489c22\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-779d48f5d9-dsbpp" podUID="c6ebfa09-1d89-41a1-975e-0d041b544630" Mar 17 17:40:34.000612 containerd[1595]: time="2025-03-17T17:40:34.000487408Z" level=error msg="Failed to destroy network for sandbox \"c9b0c4fa8919e11cff7d3923325c13c62aa514fce34d2bf42744703293fb404a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:34.000931 containerd[1595]: time="2025-03-17T17:40:34.000900300Z" level=error msg="encountered an error cleaning up failed sandbox \"c9b0c4fa8919e11cff7d3923325c13c62aa514fce34d2bf42744703293fb404a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:34.001127 containerd[1595]: time="2025-03-17T17:40:34.001092158Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5xpt7,Uid:1cbd3c90-0c66-408d-9e5d-1382eccfbde6,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"c9b0c4fa8919e11cff7d3923325c13c62aa514fce34d2bf42744703293fb404a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:34.001392 kubelet[2894]: E0317 17:40:34.001356 2894 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9b0c4fa8919e11cff7d3923325c13c62aa514fce34d2bf42744703293fb404a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:34.001457 kubelet[2894]: E0317 17:40:34.001441 2894 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9b0c4fa8919e11cff7d3923325c13c62aa514fce34d2bf42744703293fb404a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-5xpt7" Mar 17 17:40:34.001490 kubelet[2894]: E0317 17:40:34.001460 2894 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9b0c4fa8919e11cff7d3923325c13c62aa514fce34d2bf42744703293fb404a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-5xpt7" Mar 17 17:40:34.001529 kubelet[2894]: E0317 17:40:34.001501 2894 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-7db6d8ff4d-5xpt7_kube-system(1cbd3c90-0c66-408d-9e5d-1382eccfbde6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-5xpt7_kube-system(1cbd3c90-0c66-408d-9e5d-1382eccfbde6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c9b0c4fa8919e11cff7d3923325c13c62aa514fce34d2bf42744703293fb404a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-5xpt7" podUID="1cbd3c90-0c66-408d-9e5d-1382eccfbde6" Mar 17 17:40:34.278616 systemd[1]: Started sshd@9-10.0.0.27:22-10.0.0.1:33106.service - OpenSSH per-connection server daemon (10.0.0.1:33106). Mar 17 17:40:34.530324 sshd[4762]: Accepted publickey for core from 10.0.0.1 port 33106 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:40:34.531987 sshd-session[4762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:40:34.539973 systemd-logind[1578]: New session 10 of user core. Mar 17 17:40:34.550583 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 17 17:40:34.725081 sshd[4765]: Connection closed by 10.0.0.1 port 33106 Mar 17 17:40:34.725934 sshd-session[4762]: pam_unix(sshd:session): session closed for user core Mar 17 17:40:34.729722 systemd[1]: sshd@9-10.0.0.27:22-10.0.0.1:33106.service: Deactivated successfully. Mar 17 17:40:34.735399 systemd-logind[1578]: Session 10 logged out. Waiting for processes to exit. Mar 17 17:40:34.736209 systemd[1]: session-10.scope: Deactivated successfully. Mar 17 17:40:34.737749 systemd-logind[1578]: Removed session 10. Mar 17 17:40:34.790032 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2b5d6a23facb015513f2c43f3c63102f6d43652416d6f1140530390b70489c22-shm.mount: Deactivated successfully. Mar 17 17:40:34.790340 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-71a5b18ed93f00fba67ee73d5c0f6d663c5df57ae9c7398b2f1ad57743e4dfa4-shm.mount: Deactivated successfully. Mar 17 17:40:34.790545 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ed6b24f70fa60d242a22655e9a449dc4947159be0ee404338a20364a878f692f-shm.mount: Deactivated successfully. 
Mar 17 17:40:34.805351 kubelet[2894]: I0317 17:40:34.805309 2894 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71a5b18ed93f00fba67ee73d5c0f6d663c5df57ae9c7398b2f1ad57743e4dfa4" Mar 17 17:40:34.806211 containerd[1595]: time="2025-03-17T17:40:34.806173873Z" level=info msg="StopPodSandbox for \"71a5b18ed93f00fba67ee73d5c0f6d663c5df57ae9c7398b2f1ad57743e4dfa4\"" Mar 17 17:40:34.806737 containerd[1595]: time="2025-03-17T17:40:34.806400518Z" level=info msg="Ensure that sandbox 71a5b18ed93f00fba67ee73d5c0f6d663c5df57ae9c7398b2f1ad57743e4dfa4 in task-service has been cleanup successfully" Mar 17 17:40:34.806737 containerd[1595]: time="2025-03-17T17:40:34.806610040Z" level=info msg="TearDown network for sandbox \"71a5b18ed93f00fba67ee73d5c0f6d663c5df57ae9c7398b2f1ad57743e4dfa4\" successfully" Mar 17 17:40:34.806737 containerd[1595]: time="2025-03-17T17:40:34.806621362Z" level=info msg="StopPodSandbox for \"71a5b18ed93f00fba67ee73d5c0f6d663c5df57ae9c7398b2f1ad57743e4dfa4\" returns successfully" Mar 17 17:40:34.808129 containerd[1595]: time="2025-03-17T17:40:34.807926997Z" level=info msg="StopPodSandbox for \"4270495bee62ae326cae8538cb4638b63b8a315b467917aefb8a5faa220863b6\"" Mar 17 17:40:34.808486 containerd[1595]: time="2025-03-17T17:40:34.808417488Z" level=info msg="TearDown network for sandbox \"4270495bee62ae326cae8538cb4638b63b8a315b467917aefb8a5faa220863b6\" successfully" Mar 17 17:40:34.808486 containerd[1595]: time="2025-03-17T17:40:34.808438969Z" level=info msg="StopPodSandbox for \"4270495bee62ae326cae8538cb4638b63b8a315b467917aefb8a5faa220863b6\" returns successfully" Mar 17 17:40:34.810539 systemd[1]: run-netns-cni\x2d4ee6ac10\x2dff8e\x2d2a95\x2dd714\x2d2199f17d1ad0.mount: Deactivated successfully. Mar 17 17:40:34.810742 containerd[1595]: time="2025-03-17T17:40:34.810677063Z" level=info msg="StopPodSandbox for \"a5886a93d22cec47bea8d8fe870e1a8aedd1311baa8a46f1f6d33a721fd6d1f0\"" Mar 17 17:40:34.810872 containerd[1595]: time="2025-03-17T17:40:34.810840286Z" level=info msg="TearDown network for sandbox \"a5886a93d22cec47bea8d8fe870e1a8aedd1311baa8a46f1f6d33a721fd6d1f0\" successfully" Mar 17 17:40:34.810872 containerd[1595]: time="2025-03-17T17:40:34.810865274Z" level=info msg="StopPodSandbox for \"a5886a93d22cec47bea8d8fe870e1a8aedd1311baa8a46f1f6d33a721fd6d1f0\" returns successfully" Mar 17 17:40:34.811732 containerd[1595]: time="2025-03-17T17:40:34.811577661Z" level=info msg="StopPodSandbox for \"9ffaa7346b30a0a0fdab8bbda65eb3e04adb1e85c75bb216bbff56520558abc7\"" Mar 17 17:40:34.812423 containerd[1595]: time="2025-03-17T17:40:34.812352117Z" level=info msg="TearDown network for sandbox \"9ffaa7346b30a0a0fdab8bbda65eb3e04adb1e85c75bb216bbff56520558abc7\" successfully" Mar 17 17:40:34.812423 containerd[1595]: time="2025-03-17T17:40:34.812420047Z" level=info msg="StopPodSandbox for \"9ffaa7346b30a0a0fdab8bbda65eb3e04adb1e85c75bb216bbff56520558abc7\" returns successfully" Mar 17 17:40:34.813414 containerd[1595]: time="2025-03-17T17:40:34.813348157Z" level=info msg="StopPodSandbox for \"e4b287cc3440b029a61eafe60d6154dd7ee593208c644b16f34a54fd4da37eac\"" Mar 17 17:40:34.813527 containerd[1595]: time="2025-03-17T17:40:34.813486312Z" level=info msg="TearDown network for sandbox \"e4b287cc3440b029a61eafe60d6154dd7ee593208c644b16f34a54fd4da37eac\" successfully" Mar 17 17:40:34.813527 containerd[1595]: time="2025-03-17T17:40:34.813510248Z" level=info msg="StopPodSandbox for \"e4b287cc3440b029a61eafe60d6154dd7ee593208c644b16f34a54fd4da37eac\" returns 
successfully" Mar 17 17:40:34.813731 kubelet[2894]: I0317 17:40:34.813594 2894 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b5d6a23facb015513f2c43f3c63102f6d43652416d6f1140530390b70489c22" Mar 17 17:40:34.814090 kubelet[2894]: E0317 17:40:34.814050 2894 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:34.814760 containerd[1595]: time="2025-03-17T17:40:34.814355961Z" level=info msg="StopPodSandbox for \"2b5d6a23facb015513f2c43f3c63102f6d43652416d6f1140530390b70489c22\"" Mar 17 17:40:34.814760 containerd[1595]: time="2025-03-17T17:40:34.814596463Z" level=info msg="Ensure that sandbox 2b5d6a23facb015513f2c43f3c63102f6d43652416d6f1140530390b70489c22 in task-service has been cleanup successfully" Mar 17 17:40:34.814972 containerd[1595]: time="2025-03-17T17:40:34.814930663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j5l2k,Uid:e68c1525-3bc8-4435-a253-fa308a8e7604,Namespace:kube-system,Attempt:5,}" Mar 17 17:40:34.817264 containerd[1595]: time="2025-03-17T17:40:34.816897497Z" level=info msg="TearDown network for sandbox \"2b5d6a23facb015513f2c43f3c63102f6d43652416d6f1140530390b70489c22\" successfully" Mar 17 17:40:34.817264 containerd[1595]: time="2025-03-17T17:40:34.816918397Z" level=info msg="StopPodSandbox for \"2b5d6a23facb015513f2c43f3c63102f6d43652416d6f1140530390b70489c22\" returns successfully" Mar 17 17:40:34.817446 kubelet[2894]: I0317 17:40:34.817421 2894 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26f56c40fb6c99dbd1eab86735c25ff740b8a6908e9f46bd4b34e8974fec6340" Mar 17 17:40:34.817920 containerd[1595]: time="2025-03-17T17:40:34.817881084Z" level=info msg="StopPodSandbox for \"d1cb43dde5233e158ca5f81018f9de97cf33fb91019056b6ca49d61b708d25d9\"" Mar 17 17:40:34.818029 containerd[1595]: time="2025-03-17T17:40:34.817998489Z" level=info msg="TearDown network for sandbox \"d1cb43dde5233e158ca5f81018f9de97cf33fb91019056b6ca49d61b708d25d9\" successfully" Mar 17 17:40:34.818029 containerd[1595]: time="2025-03-17T17:40:34.818021634Z" level=info msg="StopPodSandbox for \"d1cb43dde5233e158ca5f81018f9de97cf33fb91019056b6ca49d61b708d25d9\" returns successfully" Mar 17 17:40:34.818335 containerd[1595]: time="2025-03-17T17:40:34.818298495Z" level=info msg="StopPodSandbox for \"26f56c40fb6c99dbd1eab86735c25ff740b8a6908e9f46bd4b34e8974fec6340\"" Mar 17 17:40:34.818559 containerd[1595]: time="2025-03-17T17:40:34.818525310Z" level=info msg="Ensure that sandbox 26f56c40fb6c99dbd1eab86735c25ff740b8a6908e9f46bd4b34e8974fec6340 in task-service has been cleanup successfully" Mar 17 17:40:34.818970 containerd[1595]: time="2025-03-17T17:40:34.818942230Z" level=info msg="StopPodSandbox for \"f89cfd950642aff10b965e79591c09ac4fdcd01831ada871cb26d2e8ea76c47f\"" Mar 17 17:40:34.819091 containerd[1595]: time="2025-03-17T17:40:34.819058643Z" level=info msg="TearDown network for sandbox \"f89cfd950642aff10b965e79591c09ac4fdcd01831ada871cb26d2e8ea76c47f\" successfully" Mar 17 17:40:34.819091 containerd[1595]: time="2025-03-17T17:40:34.819088029Z" level=info msg="StopPodSandbox for \"f89cfd950642aff10b965e79591c09ac4fdcd01831ada871cb26d2e8ea76c47f\" returns successfully" Mar 17 17:40:34.819408 systemd[1]: run-netns-cni\x2d843dd37e\x2df5d1\x2d3cba\x2da6de\x2d8d0c043aec54.mount: Deactivated successfully. 
Mar 17 17:40:34.819531 containerd[1595]: time="2025-03-17T17:40:34.819446197Z" level=info msg="StopPodSandbox for \"a68bf2fe20fe8241ece0438f3d2d33a6cb6d0512af7d808f8184cdf260b62dad\"" Mar 17 17:40:34.819573 containerd[1595]: time="2025-03-17T17:40:34.819540357Z" level=info msg="TearDown network for sandbox \"a68bf2fe20fe8241ece0438f3d2d33a6cb6d0512af7d808f8184cdf260b62dad\" successfully" Mar 17 17:40:34.819573 containerd[1595]: time="2025-03-17T17:40:34.819554615Z" level=info msg="StopPodSandbox for \"a68bf2fe20fe8241ece0438f3d2d33a6cb6d0512af7d808f8184cdf260b62dad\" returns successfully" Mar 17 17:40:34.823118 containerd[1595]: time="2025-03-17T17:40:34.820444683Z" level=info msg="StopPodSandbox for \"fd6dfebbb17b8ec1edcc3a670e197687101f350c8be57ce7dffd6c1bfaccbdbd\"" Mar 17 17:40:34.823118 containerd[1595]: time="2025-03-17T17:40:34.820557399Z" level=info msg="TearDown network for sandbox \"fd6dfebbb17b8ec1edcc3a670e197687101f350c8be57ce7dffd6c1bfaccbdbd\" successfully" Mar 17 17:40:34.823118 containerd[1595]: time="2025-03-17T17:40:34.820571706Z" level=info msg="StopPodSandbox for \"fd6dfebbb17b8ec1edcc3a670e197687101f350c8be57ce7dffd6c1bfaccbdbd\" returns successfully" Mar 17 17:40:34.823118 containerd[1595]: time="2025-03-17T17:40:34.821349418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-779d48f5d9-dsbpp,Uid:c6ebfa09-1d89-41a1-975e-0d041b544630,Namespace:calico-apiserver,Attempt:5,}" Mar 17 17:40:34.823118 containerd[1595]: time="2025-03-17T17:40:34.822056505Z" level=info msg="TearDown network for sandbox \"26f56c40fb6c99dbd1eab86735c25ff740b8a6908e9f46bd4b34e8974fec6340\" successfully" Mar 17 17:40:34.823118 containerd[1595]: time="2025-03-17T17:40:34.822094338Z" level=info msg="StopPodSandbox for \"26f56c40fb6c99dbd1eab86735c25ff740b8a6908e9f46bd4b34e8974fec6340\" returns successfully" Mar 17 17:40:34.823817 containerd[1595]: time="2025-03-17T17:40:34.823624282Z" level=info msg="StopPodSandbox for \"f7ef8277baee4989ca736c39bd862f683db77596fa62f499175b7f5729ba6845\"" Mar 17 17:40:34.823817 containerd[1595]: time="2025-03-17T17:40:34.823714485Z" level=info msg="TearDown network for sandbox \"f7ef8277baee4989ca736c39bd862f683db77596fa62f499175b7f5729ba6845\" successfully" Mar 17 17:40:34.823817 containerd[1595]: time="2025-03-17T17:40:34.823723994Z" level=info msg="StopPodSandbox for \"f7ef8277baee4989ca736c39bd862f683db77596fa62f499175b7f5729ba6845\" returns successfully" Mar 17 17:40:34.824094 kubelet[2894]: I0317 17:40:34.824050 2894 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e1e7d1843a36206ce7f9b169d9c04eae943191c049cef7fc5546197a8f6354f" Mar 17 17:40:34.824211 containerd[1595]: time="2025-03-17T17:40:34.824188385Z" level=info msg="StopPodSandbox for \"34eb4f16d7996747e5b14e123a347d522aeb89d8a3955f6791e639343c24d3a9\"" Mar 17 17:40:34.824302 containerd[1595]: time="2025-03-17T17:40:34.824282616Z" level=info msg="TearDown network for sandbox \"34eb4f16d7996747e5b14e123a347d522aeb89d8a3955f6791e639343c24d3a9\" successfully" Mar 17 17:40:34.824302 containerd[1595]: time="2025-03-17T17:40:34.824296693Z" level=info msg="StopPodSandbox for \"34eb4f16d7996747e5b14e123a347d522aeb89d8a3955f6791e639343c24d3a9\" returns successfully" Mar 17 17:40:34.825721 containerd[1595]: time="2025-03-17T17:40:34.825677301Z" level=info msg="StopPodSandbox for \"ac0efcfe96b6c12ef5ec0b7000810e0e2965d5634139771788530249931169bd\"" Mar 17 17:40:34.825838 containerd[1595]: time="2025-03-17T17:40:34.825810076Z" level=info msg="TearDown 
network for sandbox \"ac0efcfe96b6c12ef5ec0b7000810e0e2965d5634139771788530249931169bd\" successfully" Mar 17 17:40:34.825838 containerd[1595]: time="2025-03-17T17:40:34.825828180Z" level=info msg="StopPodSandbox for \"ac0efcfe96b6c12ef5ec0b7000810e0e2965d5634139771788530249931169bd\" returns successfully" Mar 17 17:40:34.826148 containerd[1595]: time="2025-03-17T17:40:34.826038505Z" level=info msg="StopPodSandbox for \"2e1e7d1843a36206ce7f9b169d9c04eae943191c049cef7fc5546197a8f6354f\"" Mar 17 17:40:34.826585 systemd[1]: run-netns-cni\x2d9ca96a59\x2d1550\x2d4e22\x2d1af2\x2d8d835f61e27a.mount: Deactivated successfully. Mar 17 17:40:34.826725 containerd[1595]: time="2025-03-17T17:40:34.826687900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-24zxx,Uid:e6243402-8f9c-4b35-b2c7-317fe823ae81,Namespace:calico-system,Attempt:4,}" Mar 17 17:40:34.827722 containerd[1595]: time="2025-03-17T17:40:34.827588388Z" level=info msg="Ensure that sandbox 2e1e7d1843a36206ce7f9b169d9c04eae943191c049cef7fc5546197a8f6354f in task-service has been cleanup successfully" Mar 17 17:40:34.828455 containerd[1595]: time="2025-03-17T17:40:34.828344438Z" level=info msg="TearDown network for sandbox \"2e1e7d1843a36206ce7f9b169d9c04eae943191c049cef7fc5546197a8f6354f\" successfully" Mar 17 17:40:34.828577 containerd[1595]: time="2025-03-17T17:40:34.828546897Z" level=info msg="StopPodSandbox for \"2e1e7d1843a36206ce7f9b169d9c04eae943191c049cef7fc5546197a8f6354f\" returns successfully" Mar 17 17:40:34.829267 kubelet[2894]: I0317 17:40:34.829008 2894 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9b0c4fa8919e11cff7d3923325c13c62aa514fce34d2bf42744703293fb404a" Mar 17 17:40:34.830060 containerd[1595]: time="2025-03-17T17:40:34.829741188Z" level=info msg="StopPodSandbox for \"8f5cb31cabf8e8d27bccef253b449fa86feb2ca1ec05565c3fcf2543ccde53fe\"" Mar 17 17:40:34.830060 containerd[1595]: time="2025-03-17T17:40:34.829849757Z" level=info msg="TearDown network for sandbox \"8f5cb31cabf8e8d27bccef253b449fa86feb2ca1ec05565c3fcf2543ccde53fe\" successfully" Mar 17 17:40:34.830060 containerd[1595]: time="2025-03-17T17:40:34.829861749Z" level=info msg="StopPodSandbox for \"8f5cb31cabf8e8d27bccef253b449fa86feb2ca1ec05565c3fcf2543ccde53fe\" returns successfully" Mar 17 17:40:34.830436 containerd[1595]: time="2025-03-17T17:40:34.830416745Z" level=info msg="StopPodSandbox for \"49134f7de4eb41c8fca39f464b46178e5f924d3d67df790cc4ff6aac22e62c20\"" Mar 17 17:40:34.830562 containerd[1595]: time="2025-03-17T17:40:34.830548206Z" level=info msg="TearDown network for sandbox \"49134f7de4eb41c8fca39f464b46178e5f924d3d67df790cc4ff6aac22e62c20\" successfully" Mar 17 17:40:34.830633 containerd[1595]: time="2025-03-17T17:40:34.830615545Z" level=info msg="StopPodSandbox for \"49134f7de4eb41c8fca39f464b46178e5f924d3d67df790cc4ff6aac22e62c20\" returns successfully" Mar 17 17:40:34.830816 containerd[1595]: time="2025-03-17T17:40:34.830712030Z" level=info msg="StopPodSandbox for \"c9b0c4fa8919e11cff7d3923325c13c62aa514fce34d2bf42744703293fb404a\"" Mar 17 17:40:34.831833 systemd[1]: run-netns-cni\x2ddb8bf50a\x2d078e\x2dad95\x2d650f\x2dec5ca99395d3.mount: Deactivated successfully. 
Mar 17 17:40:34.831967 containerd[1595]: time="2025-03-17T17:40:34.831913015Z" level=info msg="Ensure that sandbox c9b0c4fa8919e11cff7d3923325c13c62aa514fce34d2bf42744703293fb404a in task-service has been cleanup successfully" Mar 17 17:40:34.832983 containerd[1595]: time="2025-03-17T17:40:34.832137676Z" level=info msg="TearDown network for sandbox \"c9b0c4fa8919e11cff7d3923325c13c62aa514fce34d2bf42744703293fb404a\" successfully" Mar 17 17:40:34.832983 containerd[1595]: time="2025-03-17T17:40:34.832158807Z" level=info msg="StopPodSandbox for \"c9b0c4fa8919e11cff7d3923325c13c62aa514fce34d2bf42744703293fb404a\" returns successfully" Mar 17 17:40:34.832983 containerd[1595]: time="2025-03-17T17:40:34.831171652Z" level=info msg="StopPodSandbox for \"b9d430bac5a5a7a8efd8a37750b38ef6c82b3d3bf17afcb8593cde6668d95c1c\"" Mar 17 17:40:34.833139 containerd[1595]: time="2025-03-17T17:40:34.833057340Z" level=info msg="StopPodSandbox for \"1adf26c22b4b5d1793570092cabedf8622ab2953053e532861e9cd3cabf9e781\"" Mar 17 17:40:34.833266 containerd[1595]: time="2025-03-17T17:40:34.833167331Z" level=info msg="TearDown network for sandbox \"1adf26c22b4b5d1793570092cabedf8622ab2953053e532861e9cd3cabf9e781\" successfully" Mar 17 17:40:34.833266 containerd[1595]: time="2025-03-17T17:40:34.833189784Z" level=info msg="StopPodSandbox for \"1adf26c22b4b5d1793570092cabedf8622ab2953053e532861e9cd3cabf9e781\" returns successfully" Mar 17 17:40:34.833502 containerd[1595]: time="2025-03-17T17:40:34.833375661Z" level=info msg="TearDown network for sandbox \"b9d430bac5a5a7a8efd8a37750b38ef6c82b3d3bf17afcb8593cde6668d95c1c\" successfully" Mar 17 17:40:34.833502 containerd[1595]: time="2025-03-17T17:40:34.833446958Z" level=info msg="StopPodSandbox for \"b9d430bac5a5a7a8efd8a37750b38ef6c82b3d3bf17afcb8593cde6668d95c1c\" returns successfully" Mar 17 17:40:34.835452 containerd[1595]: time="2025-03-17T17:40:34.835416907Z" level=info msg="StopPodSandbox for \"e8e267785d373d30ff890d65856ca59b4c0a3425918915226c4a5960a98dee38\"" Mar 17 17:40:34.835551 containerd[1595]: time="2025-03-17T17:40:34.835528140Z" level=info msg="TearDown network for sandbox \"e8e267785d373d30ff890d65856ca59b4c0a3425918915226c4a5960a98dee38\" successfully" Mar 17 17:40:34.835551 containerd[1595]: time="2025-03-17T17:40:34.835546515Z" level=info msg="StopPodSandbox for \"e8e267785d373d30ff890d65856ca59b4c0a3425918915226c4a5960a98dee38\" returns successfully" Mar 17 17:40:34.836249 kubelet[2894]: I0317 17:40:34.835980 2894 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed6b24f70fa60d242a22655e9a449dc4947159be0ee404338a20364a878f692f" Mar 17 17:40:34.837542 containerd[1595]: time="2025-03-17T17:40:34.837124854Z" level=info msg="StopPodSandbox for \"20eeb52a08d0b508aef33de21dde17e0cd81210aa71fa73dbc2ac4461c919633\"" Mar 17 17:40:34.837542 containerd[1595]: time="2025-03-17T17:40:34.837280923Z" level=info msg="StopPodSandbox for \"f5c43aa1edb80e8f71c4f32c2f94d14b61bb5ad2e565373b70dd7c592880ce33\"" Mar 17 17:40:34.837687 containerd[1595]: time="2025-03-17T17:40:34.837632828Z" level=info msg="TearDown network for sandbox \"20eeb52a08d0b508aef33de21dde17e0cd81210aa71fa73dbc2ac4461c919633\" successfully" Mar 17 17:40:34.837687 containerd[1595]: time="2025-03-17T17:40:34.837649861Z" level=info msg="StopPodSandbox for \"20eeb52a08d0b508aef33de21dde17e0cd81210aa71fa73dbc2ac4461c919633\" returns successfully" Mar 17 17:40:34.837875 containerd[1595]: time="2025-03-17T17:40:34.837848502Z" level=info msg="TearDown network for sandbox 
\"f5c43aa1edb80e8f71c4f32c2f94d14b61bb5ad2e565373b70dd7c592880ce33\" successfully" Mar 17 17:40:34.837875 containerd[1595]: time="2025-03-17T17:40:34.837866977Z" level=info msg="StopPodSandbox for \"f5c43aa1edb80e8f71c4f32c2f94d14b61bb5ad2e565373b70dd7c592880ce33\" returns successfully" Mar 17 17:40:34.838084 containerd[1595]: time="2025-03-17T17:40:34.838051913Z" level=info msg="StopPodSandbox for \"ed6b24f70fa60d242a22655e9a449dc4947159be0ee404338a20364a878f692f\"" Mar 17 17:40:34.838813 containerd[1595]: time="2025-03-17T17:40:34.838782754Z" level=info msg="StopPodSandbox for \"365256915cb838ef59c5441635355f0f8c542b492296dba2257adbd1208d3145\"" Mar 17 17:40:34.839193 containerd[1595]: time="2025-03-17T17:40:34.839141984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-779d48f5d9-9lw4k,Uid:05bc58a2-8b10-4350-b41e-7b091d9a3a8c,Namespace:calico-apiserver,Attempt:5,}" Mar 17 17:40:34.839498 containerd[1595]: time="2025-03-17T17:40:34.839266243Z" level=info msg="TearDown network for sandbox \"365256915cb838ef59c5441635355f0f8c542b492296dba2257adbd1208d3145\" successfully" Mar 17 17:40:34.839498 containerd[1595]: time="2025-03-17T17:40:34.839288686Z" level=info msg="StopPodSandbox for \"365256915cb838ef59c5441635355f0f8c542b492296dba2257adbd1208d3145\" returns successfully" Mar 17 17:40:34.839832 kubelet[2894]: E0317 17:40:34.839514 2894 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:34.839959 containerd[1595]: time="2025-03-17T17:40:34.839926929Z" level=info msg="Ensure that sandbox ed6b24f70fa60d242a22655e9a449dc4947159be0ee404338a20364a878f692f in task-service has been cleanup successfully" Mar 17 17:40:34.840276 containerd[1595]: time="2025-03-17T17:40:34.840209101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5xpt7,Uid:1cbd3c90-0c66-408d-9e5d-1382eccfbde6,Namespace:kube-system,Attempt:5,}" Mar 17 17:40:34.840457 containerd[1595]: time="2025-03-17T17:40:34.840422230Z" level=info msg="TearDown network for sandbox \"ed6b24f70fa60d242a22655e9a449dc4947159be0ee404338a20364a878f692f\" successfully" Mar 17 17:40:34.840700 containerd[1595]: time="2025-03-17T17:40:34.840671538Z" level=info msg="StopPodSandbox for \"ed6b24f70fa60d242a22655e9a449dc4947159be0ee404338a20364a878f692f\" returns successfully" Mar 17 17:40:34.842162 containerd[1595]: time="2025-03-17T17:40:34.841657019Z" level=info msg="StopPodSandbox for \"3ae5e7e1a2edfff6d4b0c4d0359bb49c7476f5217a6d73d532f2042191956211\"" Mar 17 17:40:34.842162 containerd[1595]: time="2025-03-17T17:40:34.841835150Z" level=info msg="TearDown network for sandbox \"3ae5e7e1a2edfff6d4b0c4d0359bb49c7476f5217a6d73d532f2042191956211\" successfully" Mar 17 17:40:34.842162 containerd[1595]: time="2025-03-17T17:40:34.841851482Z" level=info msg="StopPodSandbox for \"3ae5e7e1a2edfff6d4b0c4d0359bb49c7476f5217a6d73d532f2042191956211\" returns successfully" Mar 17 17:40:34.842492 containerd[1595]: time="2025-03-17T17:40:34.842397319Z" level=info msg="StopPodSandbox for \"4781c562b9eeba235aaf21f9b2b6e59972144759785c73a0b5142cf2ae289716\"" Mar 17 17:40:34.842549 containerd[1595]: time="2025-03-17T17:40:34.842530555Z" level=info msg="TearDown network for sandbox \"4781c562b9eeba235aaf21f9b2b6e59972144759785c73a0b5142cf2ae289716\" successfully" Mar 17 17:40:34.842549 containerd[1595]: time="2025-03-17T17:40:34.842541566Z" level=info msg="StopPodSandbox for 
\"4781c562b9eeba235aaf21f9b2b6e59972144759785c73a0b5142cf2ae289716\" returns successfully" Mar 17 17:40:34.843164 containerd[1595]: time="2025-03-17T17:40:34.843092112Z" level=info msg="StopPodSandbox for \"42b421cddd2705783b89c22afae3fb49cf60cb0687bb90d17c77fdd923f81a26\"" Mar 17 17:40:34.843217 containerd[1595]: time="2025-03-17T17:40:34.843192585Z" level=info msg="TearDown network for sandbox \"42b421cddd2705783b89c22afae3fb49cf60cb0687bb90d17c77fdd923f81a26\" successfully" Mar 17 17:40:34.843217 containerd[1595]: time="2025-03-17T17:40:34.843202985Z" level=info msg="StopPodSandbox for \"42b421cddd2705783b89c22afae3fb49cf60cb0687bb90d17c77fdd923f81a26\" returns successfully" Mar 17 17:40:34.843487 containerd[1595]: time="2025-03-17T17:40:34.843455108Z" level=info msg="StopPodSandbox for \"6d5908974dd55c896be2ce2e8fc7161568647ff858d2e39ef99109ac7e6c038f\"" Mar 17 17:40:34.843622 containerd[1595]: time="2025-03-17T17:40:34.843538839Z" level=info msg="TearDown network for sandbox \"6d5908974dd55c896be2ce2e8fc7161568647ff858d2e39ef99109ac7e6c038f\" successfully" Mar 17 17:40:34.843622 containerd[1595]: time="2025-03-17T17:40:34.843549129Z" level=info msg="StopPodSandbox for \"6d5908974dd55c896be2ce2e8fc7161568647ff858d2e39ef99109ac7e6c038f\" returns successfully" Mar 17 17:40:34.844575 containerd[1595]: time="2025-03-17T17:40:34.844303997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b6b58f89d-g52xg,Uid:43ddfd49-802e-4437-b6f0-ed427cdd6be8,Namespace:calico-system,Attempt:5,}" Mar 17 17:40:35.786758 systemd[1]: run-netns-cni\x2dfd6d3fd5\x2d4283\x2d7f04\x2d67fa\x2d6503b36d3d3a.mount: Deactivated successfully. Mar 17 17:40:35.786961 systemd[1]: run-netns-cni\x2dd07927d7\x2d4873\x2dd8b5\x2d335a\x2da765269a9941.mount: Deactivated successfully. Mar 17 17:40:35.787211 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1737705163.mount: Deactivated successfully. 
Mar 17 17:40:37.394678 containerd[1595]: time="2025-03-17T17:40:37.394583983Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:40:37.583489 containerd[1595]: time="2025-03-17T17:40:37.583287454Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.2: active requests=0, bytes read=142241445" Mar 17 17:40:37.717844 containerd[1595]: time="2025-03-17T17:40:37.717758890Z" level=error msg="Failed to destroy network for sandbox \"4040578a1116952f521b77502ce508aa0d53ea74f89ae69cc9c0b36990e1d298\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:37.718317 containerd[1595]: time="2025-03-17T17:40:37.718274958Z" level=error msg="encountered an error cleaning up failed sandbox \"4040578a1116952f521b77502ce508aa0d53ea74f89ae69cc9c0b36990e1d298\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:37.718381 containerd[1595]: time="2025-03-17T17:40:37.718353389Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-779d48f5d9-dsbpp,Uid:c6ebfa09-1d89-41a1-975e-0d041b544630,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox \"4040578a1116952f521b77502ce508aa0d53ea74f89ae69cc9c0b36990e1d298\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:37.718691 kubelet[2894]: E0317 17:40:37.718622 2894 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4040578a1116952f521b77502ce508aa0d53ea74f89ae69cc9c0b36990e1d298\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:37.719350 kubelet[2894]: E0317 17:40:37.718708 2894 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4040578a1116952f521b77502ce508aa0d53ea74f89ae69cc9c0b36990e1d298\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-779d48f5d9-dsbpp" Mar 17 17:40:37.719350 kubelet[2894]: E0317 17:40:37.718733 2894 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4040578a1116952f521b77502ce508aa0d53ea74f89ae69cc9c0b36990e1d298\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-779d48f5d9-dsbpp" Mar 17 17:40:37.719350 kubelet[2894]: E0317 17:40:37.718788 2894 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-779d48f5d9-dsbpp_calico-apiserver(c6ebfa09-1d89-41a1-975e-0d041b544630)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"calico-apiserver-779d48f5d9-dsbpp_calico-apiserver(c6ebfa09-1d89-41a1-975e-0d041b544630)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4040578a1116952f521b77502ce508aa0d53ea74f89ae69cc9c0b36990e1d298\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-779d48f5d9-dsbpp" podUID="c6ebfa09-1d89-41a1-975e-0d041b544630" Mar 17 17:40:37.755588 containerd[1595]: time="2025-03-17T17:40:37.755506054Z" level=info msg="ImageCreate event name:\"sha256:048bf7af1f8c697d151dbecc478a18e89d89ed8627da98e17a56c11b3d45d351\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:40:37.813340 containerd[1595]: time="2025-03-17T17:40:37.813275546Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:40:37.814611 containerd[1595]: time="2025-03-17T17:40:37.814530029Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.2\" with image id \"sha256:048bf7af1f8c697d151dbecc478a18e89d89ed8627da98e17a56c11b3d45d351\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\", size \"142241307\" in 7.114875041s" Mar 17 17:40:37.814690 containerd[1595]: time="2025-03-17T17:40:37.814612086Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\" returns image reference \"sha256:048bf7af1f8c697d151dbecc478a18e89d89ed8627da98e17a56c11b3d45d351\"" Mar 17 17:40:37.828966 containerd[1595]: time="2025-03-17T17:40:37.828906695Z" level=info msg="CreateContainer within sandbox \"b2b790914055531bd27801599a74fcacd9daaf810409971fc4a1dcf80c9de97c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 17 17:40:37.844862 kubelet[2894]: I0317 17:40:37.844694 2894 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4040578a1116952f521b77502ce508aa0d53ea74f89ae69cc9c0b36990e1d298" Mar 17 17:40:37.846512 containerd[1595]: time="2025-03-17T17:40:37.846469081Z" level=info msg="StopPodSandbox for \"4040578a1116952f521b77502ce508aa0d53ea74f89ae69cc9c0b36990e1d298\"" Mar 17 17:40:37.846730 containerd[1595]: time="2025-03-17T17:40:37.846703369Z" level=info msg="Ensure that sandbox 4040578a1116952f521b77502ce508aa0d53ea74f89ae69cc9c0b36990e1d298 in task-service has been cleanup successfully" Mar 17 17:40:37.847541 containerd[1595]: time="2025-03-17T17:40:37.847512429Z" level=info msg="TearDown network for sandbox \"4040578a1116952f521b77502ce508aa0d53ea74f89ae69cc9c0b36990e1d298\" successfully" Mar 17 17:40:37.847541 containerd[1595]: time="2025-03-17T17:40:37.847538569Z" level=info msg="StopPodSandbox for \"4040578a1116952f521b77502ce508aa0d53ea74f89ae69cc9c0b36990e1d298\" returns successfully" Mar 17 17:40:37.848818 containerd[1595]: time="2025-03-17T17:40:37.848786379Z" level=info msg="StopPodSandbox for \"2b5d6a23facb015513f2c43f3c63102f6d43652416d6f1140530390b70489c22\"" Mar 17 17:40:37.848920 containerd[1595]: time="2025-03-17T17:40:37.848898854Z" level=info msg="TearDown network for sandbox \"2b5d6a23facb015513f2c43f3c63102f6d43652416d6f1140530390b70489c22\" successfully" Mar 17 17:40:37.848920 containerd[1595]: time="2025-03-17T17:40:37.848916718Z" level=info 
msg="StopPodSandbox for \"2b5d6a23facb015513f2c43f3c63102f6d43652416d6f1140530390b70489c22\" returns successfully" Mar 17 17:40:37.849645 containerd[1595]: time="2025-03-17T17:40:37.849616247Z" level=info msg="StopPodSandbox for \"d1cb43dde5233e158ca5f81018f9de97cf33fb91019056b6ca49d61b708d25d9\"" Mar 17 17:40:37.849737 containerd[1595]: time="2025-03-17T17:40:37.849716651Z" level=info msg="TearDown network for sandbox \"d1cb43dde5233e158ca5f81018f9de97cf33fb91019056b6ca49d61b708d25d9\" successfully" Mar 17 17:40:37.849737 containerd[1595]: time="2025-03-17T17:40:37.849734544Z" level=info msg="StopPodSandbox for \"d1cb43dde5233e158ca5f81018f9de97cf33fb91019056b6ca49d61b708d25d9\" returns successfully" Mar 17 17:40:37.850267 containerd[1595]: time="2025-03-17T17:40:37.850004441Z" level=info msg="StopPodSandbox for \"f89cfd950642aff10b965e79591c09ac4fdcd01831ada871cb26d2e8ea76c47f\"" Mar 17 17:40:37.850267 containerd[1595]: time="2025-03-17T17:40:37.850096408Z" level=info msg="TearDown network for sandbox \"f89cfd950642aff10b965e79591c09ac4fdcd01831ada871cb26d2e8ea76c47f\" successfully" Mar 17 17:40:37.850267 containerd[1595]: time="2025-03-17T17:40:37.850120955Z" level=info msg="StopPodSandbox for \"f89cfd950642aff10b965e79591c09ac4fdcd01831ada871cb26d2e8ea76c47f\" returns successfully" Mar 17 17:40:37.850428 containerd[1595]: time="2025-03-17T17:40:37.850403155Z" level=info msg="StopPodSandbox for \"a68bf2fe20fe8241ece0438f3d2d33a6cb6d0512af7d808f8184cdf260b62dad\"" Mar 17 17:40:37.850520 containerd[1595]: time="2025-03-17T17:40:37.850498007Z" level=info msg="TearDown network for sandbox \"a68bf2fe20fe8241ece0438f3d2d33a6cb6d0512af7d808f8184cdf260b62dad\" successfully" Mar 17 17:40:37.850520 containerd[1595]: time="2025-03-17T17:40:37.850517254Z" level=info msg="StopPodSandbox for \"a68bf2fe20fe8241ece0438f3d2d33a6cb6d0512af7d808f8184cdf260b62dad\" returns successfully" Mar 17 17:40:37.850854 containerd[1595]: time="2025-03-17T17:40:37.850827628Z" level=info msg="StopPodSandbox for \"fd6dfebbb17b8ec1edcc3a670e197687101f350c8be57ce7dffd6c1bfaccbdbd\"" Mar 17 17:40:37.850944 containerd[1595]: time="2025-03-17T17:40:37.850921177Z" level=info msg="TearDown network for sandbox \"fd6dfebbb17b8ec1edcc3a670e197687101f350c8be57ce7dffd6c1bfaccbdbd\" successfully" Mar 17 17:40:37.850944 containerd[1595]: time="2025-03-17T17:40:37.850939422Z" level=info msg="StopPodSandbox for \"fd6dfebbb17b8ec1edcc3a670e197687101f350c8be57ce7dffd6c1bfaccbdbd\" returns successfully" Mar 17 17:40:37.851702 containerd[1595]: time="2025-03-17T17:40:37.851664361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-779d48f5d9-dsbpp,Uid:c6ebfa09-1d89-41a1-975e-0d041b544630,Namespace:calico-apiserver,Attempt:6,}" Mar 17 17:40:37.885137 containerd[1595]: time="2025-03-17T17:40:37.885058558Z" level=error msg="Failed to destroy network for sandbox \"942d161c9439f98b71a173bf29a2194d85fbeec7a0013e2dec8f2d8671baa6bb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:37.885841 containerd[1595]: time="2025-03-17T17:40:37.885597220Z" level=error msg="encountered an error cleaning up failed sandbox \"942d161c9439f98b71a173bf29a2194d85fbeec7a0013e2dec8f2d8671baa6bb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Mar 17 17:40:37.885841 containerd[1595]: time="2025-03-17T17:40:37.885675921Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-24zxx,Uid:e6243402-8f9c-4b35-b2c7-317fe823ae81,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"942d161c9439f98b71a173bf29a2194d85fbeec7a0013e2dec8f2d8671baa6bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:37.886096 kubelet[2894]: E0317 17:40:37.886038 2894 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"942d161c9439f98b71a173bf29a2194d85fbeec7a0013e2dec8f2d8671baa6bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:37.886197 kubelet[2894]: E0317 17:40:37.886146 2894 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"942d161c9439f98b71a173bf29a2194d85fbeec7a0013e2dec8f2d8671baa6bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-24zxx" Mar 17 17:40:37.886197 kubelet[2894]: E0317 17:40:37.886176 2894 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"942d161c9439f98b71a173bf29a2194d85fbeec7a0013e2dec8f2d8671baa6bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-24zxx" Mar 17 17:40:37.886410 kubelet[2894]: E0317 17:40:37.886313 2894 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-24zxx_calico-system(e6243402-8f9c-4b35-b2c7-317fe823ae81)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-24zxx_calico-system(e6243402-8f9c-4b35-b2c7-317fe823ae81)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"942d161c9439f98b71a173bf29a2194d85fbeec7a0013e2dec8f2d8671baa6bb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-24zxx" podUID="e6243402-8f9c-4b35-b2c7-317fe823ae81" Mar 17 17:40:37.907645 containerd[1595]: time="2025-03-17T17:40:37.907569362Z" level=error msg="Failed to destroy network for sandbox \"c208dc601ce7d74e812e5376fe213594190f45f56535f1466a66762c56ac3bb5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:37.908128 containerd[1595]: time="2025-03-17T17:40:37.908078707Z" level=error msg="encountered an error cleaning up failed sandbox \"c208dc601ce7d74e812e5376fe213594190f45f56535f1466a66762c56ac3bb5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:37.908192 containerd[1595]: time="2025-03-17T17:40:37.908163009Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-779d48f5d9-9lw4k,Uid:05bc58a2-8b10-4350-b41e-7b091d9a3a8c,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox \"c208dc601ce7d74e812e5376fe213594190f45f56535f1466a66762c56ac3bb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:37.908514 kubelet[2894]: E0317 17:40:37.908456 2894 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c208dc601ce7d74e812e5376fe213594190f45f56535f1466a66762c56ac3bb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:37.908678 kubelet[2894]: E0317 17:40:37.908529 2894 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c208dc601ce7d74e812e5376fe213594190f45f56535f1466a66762c56ac3bb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-779d48f5d9-9lw4k" Mar 17 17:40:37.908678 kubelet[2894]: E0317 17:40:37.908551 2894 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c208dc601ce7d74e812e5376fe213594190f45f56535f1466a66762c56ac3bb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-779d48f5d9-9lw4k" Mar 17 17:40:37.908678 kubelet[2894]: E0317 17:40:37.908599 2894 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-779d48f5d9-9lw4k_calico-apiserver(05bc58a2-8b10-4350-b41e-7b091d9a3a8c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-779d48f5d9-9lw4k_calico-apiserver(05bc58a2-8b10-4350-b41e-7b091d9a3a8c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c208dc601ce7d74e812e5376fe213594190f45f56535f1466a66762c56ac3bb5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-779d48f5d9-9lw4k" podUID="05bc58a2-8b10-4350-b41e-7b091d9a3a8c" Mar 17 17:40:37.917928 containerd[1595]: time="2025-03-17T17:40:37.917863328Z" level=error msg="Failed to destroy network for sandbox \"dd437b0769e397a19a60fedddb881b8b5945dd00b8468215af667396abe8c99b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:37.918382 containerd[1595]: time="2025-03-17T17:40:37.918349067Z" level=error msg="encountered an error cleaning up failed sandbox 
\"dd437b0769e397a19a60fedddb881b8b5945dd00b8468215af667396abe8c99b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:37.918424 containerd[1595]: time="2025-03-17T17:40:37.918409744Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b6b58f89d-g52xg,Uid:43ddfd49-802e-4437-b6f0-ed427cdd6be8,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"dd437b0769e397a19a60fedddb881b8b5945dd00b8468215af667396abe8c99b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:37.918755 kubelet[2894]: E0317 17:40:37.918700 2894 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd437b0769e397a19a60fedddb881b8b5945dd00b8468215af667396abe8c99b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:37.918817 kubelet[2894]: E0317 17:40:37.918789 2894 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd437b0769e397a19a60fedddb881b8b5945dd00b8468215af667396abe8c99b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5b6b58f89d-g52xg" Mar 17 17:40:37.918846 kubelet[2894]: E0317 17:40:37.918818 2894 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd437b0769e397a19a60fedddb881b8b5945dd00b8468215af667396abe8c99b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5b6b58f89d-g52xg" Mar 17 17:40:37.918919 kubelet[2894]: E0317 17:40:37.918884 2894 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5b6b58f89d-g52xg_calico-system(43ddfd49-802e-4437-b6f0-ed427cdd6be8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5b6b58f89d-g52xg_calico-system(43ddfd49-802e-4437-b6f0-ed427cdd6be8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dd437b0769e397a19a60fedddb881b8b5945dd00b8468215af667396abe8c99b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5b6b58f89d-g52xg" podUID="43ddfd49-802e-4437-b6f0-ed427cdd6be8" Mar 17 17:40:37.932737 containerd[1595]: time="2025-03-17T17:40:37.932677009Z" level=error msg="Failed to destroy network for sandbox \"56930828e6675ed53284ec08cf68dc2df899cbb21d4206af3c51cb94bd3641a3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Mar 17 17:40:37.933049 containerd[1595]: time="2025-03-17T17:40:37.932995048Z" level=error msg="Failed to destroy network for sandbox \"0d22527a818dc6181b158241dfda9203d1147c881adfa3d8b54bbdff5474367f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:37.933298 containerd[1595]: time="2025-03-17T17:40:37.933265076Z" level=error msg="encountered an error cleaning up failed sandbox \"56930828e6675ed53284ec08cf68dc2df899cbb21d4206af3c51cb94bd3641a3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:37.933373 containerd[1595]: time="2025-03-17T17:40:37.933343706Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5xpt7,Uid:1cbd3c90-0c66-408d-9e5d-1382eccfbde6,Namespace:kube-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"56930828e6675ed53284ec08cf68dc2df899cbb21d4206af3c51cb94bd3641a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:37.933578 containerd[1595]: time="2025-03-17T17:40:37.933533650Z" level=error msg="encountered an error cleaning up failed sandbox \"0d22527a818dc6181b158241dfda9203d1147c881adfa3d8b54bbdff5474367f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:37.933769 containerd[1595]: time="2025-03-17T17:40:37.933614465Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j5l2k,Uid:e68c1525-3bc8-4435-a253-fa308a8e7604,Namespace:kube-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"0d22527a818dc6181b158241dfda9203d1147c881adfa3d8b54bbdff5474367f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:37.933823 kubelet[2894]: E0317 17:40:37.933671 2894 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56930828e6675ed53284ec08cf68dc2df899cbb21d4206af3c51cb94bd3641a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:37.933823 kubelet[2894]: E0317 17:40:37.933747 2894 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56930828e6675ed53284ec08cf68dc2df899cbb21d4206af3c51cb94bd3641a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-5xpt7" Mar 17 17:40:37.933823 kubelet[2894]: E0317 17:40:37.933769 2894 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"56930828e6675ed53284ec08cf68dc2df899cbb21d4206af3c51cb94bd3641a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-5xpt7" Mar 17 17:40:37.933950 kubelet[2894]: E0317 17:40:37.933815 2894 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-5xpt7_kube-system(1cbd3c90-0c66-408d-9e5d-1382eccfbde6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-5xpt7_kube-system(1cbd3c90-0c66-408d-9e5d-1382eccfbde6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"56930828e6675ed53284ec08cf68dc2df899cbb21d4206af3c51cb94bd3641a3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-5xpt7" podUID="1cbd3c90-0c66-408d-9e5d-1382eccfbde6" Mar 17 17:40:37.933950 kubelet[2894]: E0317 17:40:37.933900 2894 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d22527a818dc6181b158241dfda9203d1147c881adfa3d8b54bbdff5474367f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:37.934097 kubelet[2894]: E0317 17:40:37.933987 2894 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d22527a818dc6181b158241dfda9203d1147c881adfa3d8b54bbdff5474367f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-j5l2k" Mar 17 17:40:37.934097 kubelet[2894]: E0317 17:40:37.934017 2894 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d22527a818dc6181b158241dfda9203d1147c881adfa3d8b54bbdff5474367f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-j5l2k" Mar 17 17:40:37.934097 kubelet[2894]: E0317 17:40:37.934071 2894 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-j5l2k_kube-system(e68c1525-3bc8-4435-a253-fa308a8e7604)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-j5l2k_kube-system(e68c1525-3bc8-4435-a253-fa308a8e7604)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0d22527a818dc6181b158241dfda9203d1147c881adfa3d8b54bbdff5474367f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-j5l2k" podUID="e68c1525-3bc8-4435-a253-fa308a8e7604" Mar 17 17:40:38.190642 containerd[1595]: time="2025-03-17T17:40:38.190572595Z" level=info msg="CreateContainer within sandbox \"b2b790914055531bd27801599a74fcacd9daaf810409971fc4a1dcf80c9de97c\" for 
&ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"024e8933a2d8caa418b55bc5ff261d899ca91f0ddd5bf9bc59c977c7e89d4ea1\"" Mar 17 17:40:38.191318 containerd[1595]: time="2025-03-17T17:40:38.191286943Z" level=info msg="StartContainer for \"024e8933a2d8caa418b55bc5ff261d899ca91f0ddd5bf9bc59c977c7e89d4ea1\"" Mar 17 17:40:38.333650 containerd[1595]: time="2025-03-17T17:40:38.333570783Z" level=error msg="Failed to destroy network for sandbox \"77d66960410609d1e8214051a05893a3a45d9fe74839d2d76216065131f8e2e7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:38.334597 containerd[1595]: time="2025-03-17T17:40:38.334554977Z" level=error msg="encountered an error cleaning up failed sandbox \"77d66960410609d1e8214051a05893a3a45d9fe74839d2d76216065131f8e2e7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:38.334665 containerd[1595]: time="2025-03-17T17:40:38.334633979Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-779d48f5d9-dsbpp,Uid:c6ebfa09-1d89-41a1-975e-0d041b544630,Namespace:calico-apiserver,Attempt:6,} failed, error" error="failed to setup network for sandbox \"77d66960410609d1e8214051a05893a3a45d9fe74839d2d76216065131f8e2e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:38.334956 kubelet[2894]: E0317 17:40:38.334907 2894 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77d66960410609d1e8214051a05893a3a45d9fe74839d2d76216065131f8e2e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:38.335094 kubelet[2894]: E0317 17:40:38.334990 2894 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77d66960410609d1e8214051a05893a3a45d9fe74839d2d76216065131f8e2e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-779d48f5d9-dsbpp" Mar 17 17:40:38.335094 kubelet[2894]: E0317 17:40:38.335017 2894 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77d66960410609d1e8214051a05893a3a45d9fe74839d2d76216065131f8e2e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-779d48f5d9-dsbpp" Mar 17 17:40:38.335156 kubelet[2894]: E0317 17:40:38.335080 2894 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-779d48f5d9-dsbpp_calico-apiserver(c6ebfa09-1d89-41a1-975e-0d041b544630)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-779d48f5d9-dsbpp_calico-apiserver(c6ebfa09-1d89-41a1-975e-0d041b544630)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"77d66960410609d1e8214051a05893a3a45d9fe74839d2d76216065131f8e2e7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-779d48f5d9-dsbpp" podUID="c6ebfa09-1d89-41a1-975e-0d041b544630" Mar 17 17:40:38.422747 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0d22527a818dc6181b158241dfda9203d1147c881adfa3d8b54bbdff5474367f-shm.mount: Deactivated successfully. Mar 17 17:40:38.423002 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-942d161c9439f98b71a173bf29a2194d85fbeec7a0013e2dec8f2d8671baa6bb-shm.mount: Deactivated successfully. Mar 17 17:40:38.423186 systemd[1]: run-netns-cni\x2d8a7d0da7\x2d1522\x2de945\x2d9de4\x2d7902a14cb919.mount: Deactivated successfully. Mar 17 17:40:38.423434 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4040578a1116952f521b77502ce508aa0d53ea74f89ae69cc9c0b36990e1d298-shm.mount: Deactivated successfully. Mar 17 17:40:38.452152 containerd[1595]: time="2025-03-17T17:40:38.451987838Z" level=info msg="StartContainer for \"024e8933a2d8caa418b55bc5ff261d899ca91f0ddd5bf9bc59c977c7e89d4ea1\" returns successfully" Mar 17 17:40:38.473141 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Mar 17 17:40:38.473301 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Mar 17 17:40:38.849817 kubelet[2894]: I0317 17:40:38.849671 2894 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c208dc601ce7d74e812e5376fe213594190f45f56535f1466a66762c56ac3bb5" Mar 17 17:40:38.851793 containerd[1595]: time="2025-03-17T17:40:38.850520535Z" level=info msg="StopPodSandbox for \"c208dc601ce7d74e812e5376fe213594190f45f56535f1466a66762c56ac3bb5\"" Mar 17 17:40:38.852055 containerd[1595]: time="2025-03-17T17:40:38.852026026Z" level=info msg="Ensure that sandbox c208dc601ce7d74e812e5376fe213594190f45f56535f1466a66762c56ac3bb5 in task-service has been cleanup successfully" Mar 17 17:40:38.852548 containerd[1595]: time="2025-03-17T17:40:38.852512397Z" level=info msg="TearDown network for sandbox \"c208dc601ce7d74e812e5376fe213594190f45f56535f1466a66762c56ac3bb5\" successfully" Mar 17 17:40:38.852548 containerd[1595]: time="2025-03-17T17:40:38.852540763Z" level=info msg="StopPodSandbox for \"c208dc601ce7d74e812e5376fe213594190f45f56535f1466a66762c56ac3bb5\" returns successfully" Mar 17 17:40:38.856571 containerd[1595]: time="2025-03-17T17:40:38.856503788Z" level=info msg="StopPodSandbox for \"2e1e7d1843a36206ce7f9b169d9c04eae943191c049cef7fc5546197a8f6354f\"" Mar 17 17:40:38.856694 containerd[1595]: time="2025-03-17T17:40:38.856657823Z" level=info msg="TearDown network for sandbox \"2e1e7d1843a36206ce7f9b169d9c04eae943191c049cef7fc5546197a8f6354f\" successfully" Mar 17 17:40:38.856694 containerd[1595]: time="2025-03-17T17:40:38.856673172Z" level=info msg="StopPodSandbox for \"2e1e7d1843a36206ce7f9b169d9c04eae943191c049cef7fc5546197a8f6354f\" returns successfully" Mar 17 17:40:38.857070 containerd[1595]: time="2025-03-17T17:40:38.857031548Z" level=info msg="StopPodSandbox for \"8f5cb31cabf8e8d27bccef253b449fa86feb2ca1ec05565c3fcf2543ccde53fe\"" Mar 17 17:40:38.857183 containerd[1595]: time="2025-03-17T17:40:38.857153482Z" level=info 
msg="TearDown network for sandbox \"8f5cb31cabf8e8d27bccef253b449fa86feb2ca1ec05565c3fcf2543ccde53fe\" successfully" Mar 17 17:40:38.857183 containerd[1595]: time="2025-03-17T17:40:38.857177398Z" level=info msg="StopPodSandbox for \"8f5cb31cabf8e8d27bccef253b449fa86feb2ca1ec05565c3fcf2543ccde53fe\" returns successfully" Mar 17 17:40:38.857561 systemd[1]: run-netns-cni\x2dfae9da30\x2d4ad6\x2d76d2\x2da77b\x2de48b15935757.mount: Deactivated successfully. Mar 17 17:40:38.857665 containerd[1595]: time="2025-03-17T17:40:38.857582323Z" level=info msg="StopPodSandbox for \"49134f7de4eb41c8fca39f464b46178e5f924d3d67df790cc4ff6aac22e62c20\"" Mar 17 17:40:38.857702 containerd[1595]: time="2025-03-17T17:40:38.857676893Z" level=info msg="TearDown network for sandbox \"49134f7de4eb41c8fca39f464b46178e5f924d3d67df790cc4ff6aac22e62c20\" successfully" Mar 17 17:40:38.857702 containerd[1595]: time="2025-03-17T17:40:38.857691031Z" level=info msg="StopPodSandbox for \"49134f7de4eb41c8fca39f464b46178e5f924d3d67df790cc4ff6aac22e62c20\" returns successfully" Mar 17 17:40:38.858329 containerd[1595]: time="2025-03-17T17:40:38.858291069Z" level=info msg="StopPodSandbox for \"b9d430bac5a5a7a8efd8a37750b38ef6c82b3d3bf17afcb8593cde6668d95c1c\"" Mar 17 17:40:38.858494 containerd[1595]: time="2025-03-17T17:40:38.858389067Z" level=info msg="TearDown network for sandbox \"b9d430bac5a5a7a8efd8a37750b38ef6c82b3d3bf17afcb8593cde6668d95c1c\" successfully" Mar 17 17:40:38.858494 containerd[1595]: time="2025-03-17T17:40:38.858402833Z" level=info msg="StopPodSandbox for \"b9d430bac5a5a7a8efd8a37750b38ef6c82b3d3bf17afcb8593cde6668d95c1c\" returns successfully" Mar 17 17:40:38.858986 containerd[1595]: time="2025-03-17T17:40:38.858955812Z" level=info msg="StopPodSandbox for \"f5c43aa1edb80e8f71c4f32c2f94d14b61bb5ad2e565373b70dd7c592880ce33\"" Mar 17 17:40:38.859184 containerd[1595]: time="2025-03-17T17:40:38.859162448Z" level=info msg="TearDown network for sandbox \"f5c43aa1edb80e8f71c4f32c2f94d14b61bb5ad2e565373b70dd7c592880ce33\" successfully" Mar 17 17:40:38.859273 containerd[1595]: time="2025-03-17T17:40:38.859182265Z" level=info msg="StopPodSandbox for \"f5c43aa1edb80e8f71c4f32c2f94d14b61bb5ad2e565373b70dd7c592880ce33\" returns successfully" Mar 17 17:40:38.860070 containerd[1595]: time="2025-03-17T17:40:38.859698404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-779d48f5d9-9lw4k,Uid:05bc58a2-8b10-4350-b41e-7b091d9a3a8c,Namespace:calico-apiserver,Attempt:6,}" Mar 17 17:40:38.860242 kubelet[2894]: I0317 17:40:38.860193 2894 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56930828e6675ed53284ec08cf68dc2df899cbb21d4206af3c51cb94bd3641a3" Mar 17 17:40:38.860847 containerd[1595]: time="2025-03-17T17:40:38.860814660Z" level=info msg="StopPodSandbox for \"56930828e6675ed53284ec08cf68dc2df899cbb21d4206af3c51cb94bd3641a3\"" Mar 17 17:40:38.861066 containerd[1595]: time="2025-03-17T17:40:38.861034250Z" level=info msg="Ensure that sandbox 56930828e6675ed53284ec08cf68dc2df899cbb21d4206af3c51cb94bd3641a3 in task-service has been cleanup successfully" Mar 17 17:40:38.862267 containerd[1595]: time="2025-03-17T17:40:38.861587409Z" level=info msg="TearDown network for sandbox \"56930828e6675ed53284ec08cf68dc2df899cbb21d4206af3c51cb94bd3641a3\" successfully" Mar 17 17:40:38.862267 containerd[1595]: time="2025-03-17T17:40:38.861615543Z" level=info msg="StopPodSandbox for \"56930828e6675ed53284ec08cf68dc2df899cbb21d4206af3c51cb94bd3641a3\" returns successfully" Mar 17 
17:40:38.865529 containerd[1595]: time="2025-03-17T17:40:38.864803796Z" level=info msg="StopPodSandbox for \"c9b0c4fa8919e11cff7d3923325c13c62aa514fce34d2bf42744703293fb404a\"" Mar 17 17:40:38.866175 systemd[1]: run-netns-cni\x2d683c1e3d\x2d6f93\x2dda10\x2deda8\x2df5b4e2d55486.mount: Deactivated successfully. Mar 17 17:40:38.866337 containerd[1595]: time="2025-03-17T17:40:38.866249413Z" level=info msg="TearDown network for sandbox \"c9b0c4fa8919e11cff7d3923325c13c62aa514fce34d2bf42744703293fb404a\" successfully" Mar 17 17:40:38.866337 containerd[1595]: time="2025-03-17T17:40:38.866270804Z" level=info msg="StopPodSandbox for \"c9b0c4fa8919e11cff7d3923325c13c62aa514fce34d2bf42744703293fb404a\" returns successfully" Mar 17 17:40:38.867285 containerd[1595]: time="2025-03-17T17:40:38.867255409Z" level=info msg="StopPodSandbox for \"1adf26c22b4b5d1793570092cabedf8622ab2953053e532861e9cd3cabf9e781\"" Mar 17 17:40:38.867522 containerd[1595]: time="2025-03-17T17:40:38.867362604Z" level=info msg="TearDown network for sandbox \"1adf26c22b4b5d1793570092cabedf8622ab2953053e532861e9cd3cabf9e781\" successfully" Mar 17 17:40:38.867522 containerd[1595]: time="2025-03-17T17:40:38.867379467Z" level=info msg="StopPodSandbox for \"1adf26c22b4b5d1793570092cabedf8622ab2953053e532861e9cd3cabf9e781\" returns successfully" Mar 17 17:40:38.868217 kubelet[2894]: E0317 17:40:38.868191 2894 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:38.869568 containerd[1595]: time="2025-03-17T17:40:38.869514643Z" level=info msg="StopPodSandbox for \"e8e267785d373d30ff890d65856ca59b4c0a3425918915226c4a5960a98dee38\"" Mar 17 17:40:38.869690 containerd[1595]: time="2025-03-17T17:40:38.869668317Z" level=info msg="TearDown network for sandbox \"e8e267785d373d30ff890d65856ca59b4c0a3425918915226c4a5960a98dee38\" successfully" Mar 17 17:40:38.869690 containerd[1595]: time="2025-03-17T17:40:38.869686593Z" level=info msg="StopPodSandbox for \"e8e267785d373d30ff890d65856ca59b4c0a3425918915226c4a5960a98dee38\" returns successfully" Mar 17 17:40:38.877971 containerd[1595]: time="2025-03-17T17:40:38.877911487Z" level=info msg="StopPodSandbox for \"20eeb52a08d0b508aef33de21dde17e0cd81210aa71fa73dbc2ac4461c919633\"" Mar 17 17:40:38.878770 containerd[1595]: time="2025-03-17T17:40:38.878734463Z" level=info msg="TearDown network for sandbox \"20eeb52a08d0b508aef33de21dde17e0cd81210aa71fa73dbc2ac4461c919633\" successfully" Mar 17 17:40:38.878770 containerd[1595]: time="2025-03-17T17:40:38.878759932Z" level=info msg="StopPodSandbox for \"20eeb52a08d0b508aef33de21dde17e0cd81210aa71fa73dbc2ac4461c919633\" returns successfully" Mar 17 17:40:38.880072 containerd[1595]: time="2025-03-17T17:40:38.879642210Z" level=info msg="StopPodSandbox for \"365256915cb838ef59c5441635355f0f8c542b492296dba2257adbd1208d3145\"" Mar 17 17:40:38.880072 containerd[1595]: time="2025-03-17T17:40:38.879831222Z" level=info msg="TearDown network for sandbox \"365256915cb838ef59c5441635355f0f8c542b492296dba2257adbd1208d3145\" successfully" Mar 17 17:40:38.880072 containerd[1595]: time="2025-03-17T17:40:38.879848004Z" level=info msg="StopPodSandbox for \"365256915cb838ef59c5441635355f0f8c542b492296dba2257adbd1208d3145\" returns successfully" Mar 17 17:40:38.881763 kubelet[2894]: E0317 17:40:38.881725 2894 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:38.883638 containerd[1595]: time="2025-03-17T17:40:38.883191615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5xpt7,Uid:1cbd3c90-0c66-408d-9e5d-1382eccfbde6,Namespace:kube-system,Attempt:6,}" Mar 17 17:40:38.885957 kubelet[2894]: I0317 17:40:38.884973 2894 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="942d161c9439f98b71a173bf29a2194d85fbeec7a0013e2dec8f2d8671baa6bb" Mar 17 17:40:38.887068 containerd[1595]: time="2025-03-17T17:40:38.887017748Z" level=info msg="StopPodSandbox for \"942d161c9439f98b71a173bf29a2194d85fbeec7a0013e2dec8f2d8671baa6bb\"" Mar 17 17:40:38.889642 kubelet[2894]: I0317 17:40:38.889163 2894 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd437b0769e397a19a60fedddb881b8b5945dd00b8468215af667396abe8c99b" Mar 17 17:40:38.891126 containerd[1595]: time="2025-03-17T17:40:38.890510594Z" level=info msg="StopPodSandbox for \"dd437b0769e397a19a60fedddb881b8b5945dd00b8468215af667396abe8c99b\"" Mar 17 17:40:38.891126 containerd[1595]: time="2025-03-17T17:40:38.890787054Z" level=info msg="Ensure that sandbox dd437b0769e397a19a60fedddb881b8b5945dd00b8468215af667396abe8c99b in task-service has been cleanup successfully" Mar 17 17:40:38.891126 containerd[1595]: time="2025-03-17T17:40:38.890925248Z" level=info msg="Ensure that sandbox 942d161c9439f98b71a173bf29a2194d85fbeec7a0013e2dec8f2d8671baa6bb in task-service has been cleanup successfully" Mar 17 17:40:38.896603 containerd[1595]: time="2025-03-17T17:40:38.896565264Z" level=info msg="TearDown network for sandbox \"942d161c9439f98b71a173bf29a2194d85fbeec7a0013e2dec8f2d8671baa6bb\" successfully" Mar 17 17:40:38.896784 containerd[1595]: time="2025-03-17T17:40:38.896764295Z" level=info msg="StopPodSandbox for \"942d161c9439f98b71a173bf29a2194d85fbeec7a0013e2dec8f2d8671baa6bb\" returns successfully" Mar 17 17:40:38.897884 systemd[1]: run-netns-cni\x2dd804c6e2\x2dfa95\x2d7f89\x2d4fc5\x2d262581e168aa.mount: Deactivated successfully. Mar 17 17:40:38.898149 systemd[1]: run-netns-cni\x2d00cdd910\x2de0a1\x2d5681\x2d0db9\x2dd5b5e13ea54f.mount: Deactivated successfully. 
Mar 17 17:40:38.900311 containerd[1595]: time="2025-03-17T17:40:38.898647430Z" level=info msg="TearDown network for sandbox \"dd437b0769e397a19a60fedddb881b8b5945dd00b8468215af667396abe8c99b\" successfully" Mar 17 17:40:38.900311 containerd[1595]: time="2025-03-17T17:40:38.898674412Z" level=info msg="StopPodSandbox for \"dd437b0769e397a19a60fedddb881b8b5945dd00b8468215af667396abe8c99b\" returns successfully" Mar 17 17:40:38.906640 containerd[1595]: time="2025-03-17T17:40:38.905279094Z" level=info msg="StopPodSandbox for \"26f56c40fb6c99dbd1eab86735c25ff740b8a6908e9f46bd4b34e8974fec6340\"" Mar 17 17:40:38.906640 containerd[1595]: time="2025-03-17T17:40:38.905444962Z" level=info msg="TearDown network for sandbox \"26f56c40fb6c99dbd1eab86735c25ff740b8a6908e9f46bd4b34e8974fec6340\" successfully" Mar 17 17:40:38.906640 containerd[1595]: time="2025-03-17T17:40:38.905460341Z" level=info msg="StopPodSandbox for \"26f56c40fb6c99dbd1eab86735c25ff740b8a6908e9f46bd4b34e8974fec6340\" returns successfully" Mar 17 17:40:38.906640 containerd[1595]: time="2025-03-17T17:40:38.906574784Z" level=info msg="StopPodSandbox for \"f7ef8277baee4989ca736c39bd862f683db77596fa62f499175b7f5729ba6845\"" Mar 17 17:40:38.906840 containerd[1595]: time="2025-03-17T17:40:38.906686699Z" level=info msg="TearDown network for sandbox \"f7ef8277baee4989ca736c39bd862f683db77596fa62f499175b7f5729ba6845\" successfully" Mar 17 17:40:38.906840 containerd[1595]: time="2025-03-17T17:40:38.906702659Z" level=info msg="StopPodSandbox for \"f7ef8277baee4989ca736c39bd862f683db77596fa62f499175b7f5729ba6845\" returns successfully" Mar 17 17:40:38.907374 containerd[1595]: time="2025-03-17T17:40:38.907348095Z" level=info msg="StopPodSandbox for \"34eb4f16d7996747e5b14e123a347d522aeb89d8a3955f6791e639343c24d3a9\"" Mar 17 17:40:38.907454 containerd[1595]: time="2025-03-17T17:40:38.907444068Z" level=info msg="TearDown network for sandbox \"34eb4f16d7996747e5b14e123a347d522aeb89d8a3955f6791e639343c24d3a9\" successfully" Mar 17 17:40:38.907497 containerd[1595]: time="2025-03-17T17:40:38.907457414Z" level=info msg="StopPodSandbox for \"34eb4f16d7996747e5b14e123a347d522aeb89d8a3955f6791e639343c24d3a9\" returns successfully" Mar 17 17:40:38.908850 kubelet[2894]: I0317 17:40:38.908710 2894 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d22527a818dc6181b158241dfda9203d1147c881adfa3d8b54bbdff5474367f" Mar 17 17:40:38.908938 containerd[1595]: time="2025-03-17T17:40:38.908861762Z" level=info msg="StopPodSandbox for \"ed6b24f70fa60d242a22655e9a449dc4947159be0ee404338a20364a878f692f\"" Mar 17 17:40:38.909047 containerd[1595]: time="2025-03-17T17:40:38.909019153Z" level=info msg="TearDown network for sandbox \"ed6b24f70fa60d242a22655e9a449dc4947159be0ee404338a20364a878f692f\" successfully" Mar 17 17:40:38.909112 containerd[1595]: time="2025-03-17T17:40:38.909047788Z" level=info msg="StopPodSandbox for \"ed6b24f70fa60d242a22655e9a449dc4947159be0ee404338a20364a878f692f\" returns successfully" Mar 17 17:40:38.909477 containerd[1595]: time="2025-03-17T17:40:38.909437455Z" level=info msg="StopPodSandbox for \"0d22527a818dc6181b158241dfda9203d1147c881adfa3d8b54bbdff5474367f\"" Mar 17 17:40:38.909712 containerd[1595]: time="2025-03-17T17:40:38.909680519Z" level=info msg="StopPodSandbox for \"ac0efcfe96b6c12ef5ec0b7000810e0e2965d5634139771788530249931169bd\"" Mar 17 17:40:38.909783 containerd[1595]: time="2025-03-17T17:40:38.909703794Z" level=info msg="Ensure that sandbox 
0d22527a818dc6181b158241dfda9203d1147c881adfa3d8b54bbdff5474367f in task-service has been cleanup successfully" Mar 17 17:40:38.909815 containerd[1595]: time="2025-03-17T17:40:38.909793636Z" level=info msg="TearDown network for sandbox \"ac0efcfe96b6c12ef5ec0b7000810e0e2965d5634139771788530249931169bd\" successfully" Mar 17 17:40:38.909815 containerd[1595]: time="2025-03-17T17:40:38.909809045Z" level=info msg="StopPodSandbox for \"ac0efcfe96b6c12ef5ec0b7000810e0e2965d5634139771788530249931169bd\" returns successfully" Mar 17 17:40:38.910124 containerd[1595]: time="2025-03-17T17:40:38.910055157Z" level=info msg="StopPodSandbox for \"3ae5e7e1a2edfff6d4b0c4d0359bb49c7476f5217a6d73d532f2042191956211\"" Mar 17 17:40:38.911345 containerd[1595]: time="2025-03-17T17:40:38.910209282Z" level=info msg="TearDown network for sandbox \"3ae5e7e1a2edfff6d4b0c4d0359bb49c7476f5217a6d73d532f2042191956211\" successfully" Mar 17 17:40:38.911345 containerd[1595]: time="2025-03-17T17:40:38.910243467Z" level=info msg="StopPodSandbox for \"3ae5e7e1a2edfff6d4b0c4d0359bb49c7476f5217a6d73d532f2042191956211\" returns successfully" Mar 17 17:40:38.911407 containerd[1595]: time="2025-03-17T17:40:38.911348082Z" level=info msg="TearDown network for sandbox \"0d22527a818dc6181b158241dfda9203d1147c881adfa3d8b54bbdff5474367f\" successfully" Mar 17 17:40:38.911407 containerd[1595]: time="2025-03-17T17:40:38.911369884Z" level=info msg="StopPodSandbox for \"0d22527a818dc6181b158241dfda9203d1147c881adfa3d8b54bbdff5474367f\" returns successfully" Mar 17 17:40:38.911466 containerd[1595]: time="2025-03-17T17:40:38.911446380Z" level=info msg="StopPodSandbox for \"4781c562b9eeba235aaf21f9b2b6e59972144759785c73a0b5142cf2ae289716\"" Mar 17 17:40:38.912405 containerd[1595]: time="2025-03-17T17:40:38.911550610Z" level=info msg="TearDown network for sandbox \"4781c562b9eeba235aaf21f9b2b6e59972144759785c73a0b5142cf2ae289716\" successfully" Mar 17 17:40:38.912405 containerd[1595]: time="2025-03-17T17:40:38.911565187Z" level=info msg="StopPodSandbox for \"4781c562b9eeba235aaf21f9b2b6e59972144759785c73a0b5142cf2ae289716\" returns successfully" Mar 17 17:40:38.912405 containerd[1595]: time="2025-03-17T17:40:38.911871373Z" level=info msg="StopPodSandbox for \"42b421cddd2705783b89c22afae3fb49cf60cb0687bb90d17c77fdd923f81a26\"" Mar 17 17:40:38.912405 containerd[1595]: time="2025-03-17T17:40:38.911982847Z" level=info msg="TearDown network for sandbox \"42b421cddd2705783b89c22afae3fb49cf60cb0687bb90d17c77fdd923f81a26\" successfully" Mar 17 17:40:38.912405 containerd[1595]: time="2025-03-17T17:40:38.912002384Z" level=info msg="StopPodSandbox for \"42b421cddd2705783b89c22afae3fb49cf60cb0687bb90d17c77fdd923f81a26\" returns successfully" Mar 17 17:40:38.912405 containerd[1595]: time="2025-03-17T17:40:38.912167300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-24zxx,Uid:e6243402-8f9c-4b35-b2c7-317fe823ae81,Namespace:calico-system,Attempt:5,}" Mar 17 17:40:38.912619 containerd[1595]: time="2025-03-17T17:40:38.912446244Z" level=info msg="StopPodSandbox for \"71a5b18ed93f00fba67ee73d5c0f6d663c5df57ae9c7398b2f1ad57743e4dfa4\"" Mar 17 17:40:38.912619 containerd[1595]: time="2025-03-17T17:40:38.912546786Z" level=info msg="TearDown network for sandbox \"71a5b18ed93f00fba67ee73d5c0f6d663c5df57ae9c7398b2f1ad57743e4dfa4\" successfully" Mar 17 17:40:38.912619 containerd[1595]: time="2025-03-17T17:40:38.912562987Z" level=info msg="StopPodSandbox for \"71a5b18ed93f00fba67ee73d5c0f6d663c5df57ae9c7398b2f1ad57743e4dfa4\" returns 
successfully" Mar 17 17:40:38.913329 containerd[1595]: time="2025-03-17T17:40:38.913083163Z" level=info msg="StopPodSandbox for \"4270495bee62ae326cae8538cb4638b63b8a315b467917aefb8a5faa220863b6\"" Mar 17 17:40:38.913329 containerd[1595]: time="2025-03-17T17:40:38.913203744Z" level=info msg="TearDown network for sandbox \"4270495bee62ae326cae8538cb4638b63b8a315b467917aefb8a5faa220863b6\" successfully" Mar 17 17:40:38.913329 containerd[1595]: time="2025-03-17T17:40:38.913217049Z" level=info msg="StopPodSandbox for \"4270495bee62ae326cae8538cb4638b63b8a315b467917aefb8a5faa220863b6\" returns successfully" Mar 17 17:40:38.913329 containerd[1595]: time="2025-03-17T17:40:38.913298244Z" level=info msg="StopPodSandbox for \"6d5908974dd55c896be2ce2e8fc7161568647ff858d2e39ef99109ac7e6c038f\"" Mar 17 17:40:38.913489 containerd[1595]: time="2025-03-17T17:40:38.913373719Z" level=info msg="TearDown network for sandbox \"6d5908974dd55c896be2ce2e8fc7161568647ff858d2e39ef99109ac7e6c038f\" successfully" Mar 17 17:40:38.913489 containerd[1595]: time="2025-03-17T17:40:38.913386203Z" level=info msg="StopPodSandbox for \"6d5908974dd55c896be2ce2e8fc7161568647ff858d2e39ef99109ac7e6c038f\" returns successfully" Mar 17 17:40:38.914678 containerd[1595]: time="2025-03-17T17:40:38.914635404Z" level=info msg="StopPodSandbox for \"a5886a93d22cec47bea8d8fe870e1a8aedd1311baa8a46f1f6d33a721fd6d1f0\"" Mar 17 17:40:38.914768 containerd[1595]: time="2025-03-17T17:40:38.914739814Z" level=info msg="TearDown network for sandbox \"a5886a93d22cec47bea8d8fe870e1a8aedd1311baa8a46f1f6d33a721fd6d1f0\" successfully" Mar 17 17:40:38.914768 containerd[1595]: time="2025-03-17T17:40:38.914758781Z" level=info msg="StopPodSandbox for \"a5886a93d22cec47bea8d8fe870e1a8aedd1311baa8a46f1f6d33a721fd6d1f0\" returns successfully" Mar 17 17:40:38.915658 containerd[1595]: time="2025-03-17T17:40:38.915599480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b6b58f89d-g52xg,Uid:43ddfd49-802e-4437-b6f0-ed427cdd6be8,Namespace:calico-system,Attempt:6,}" Mar 17 17:40:38.916562 containerd[1595]: time="2025-03-17T17:40:38.916512528Z" level=info msg="StopPodSandbox for \"9ffaa7346b30a0a0fdab8bbda65eb3e04adb1e85c75bb216bbff56520558abc7\"" Mar 17 17:40:38.916914 containerd[1595]: time="2025-03-17T17:40:38.916888877Z" level=info msg="TearDown network for sandbox \"9ffaa7346b30a0a0fdab8bbda65eb3e04adb1e85c75bb216bbff56520558abc7\" successfully" Mar 17 17:40:38.916914 containerd[1595]: time="2025-03-17T17:40:38.916910870Z" level=info msg="StopPodSandbox for \"9ffaa7346b30a0a0fdab8bbda65eb3e04adb1e85c75bb216bbff56520558abc7\" returns successfully" Mar 17 17:40:38.923266 containerd[1595]: time="2025-03-17T17:40:38.922608967Z" level=info msg="StopPodSandbox for \"e4b287cc3440b029a61eafe60d6154dd7ee593208c644b16f34a54fd4da37eac\"" Mar 17 17:40:38.923266 containerd[1595]: time="2025-03-17T17:40:38.922790695Z" level=info msg="TearDown network for sandbox \"e4b287cc3440b029a61eafe60d6154dd7ee593208c644b16f34a54fd4da37eac\" successfully" Mar 17 17:40:38.923266 containerd[1595]: time="2025-03-17T17:40:38.922804772Z" level=info msg="StopPodSandbox for \"e4b287cc3440b029a61eafe60d6154dd7ee593208c644b16f34a54fd4da37eac\" returns successfully" Mar 17 17:40:38.923712 kubelet[2894]: E0317 17:40:38.923684 2894 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:38.925858 containerd[1595]: 
time="2025-03-17T17:40:38.925810696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j5l2k,Uid:e68c1525-3bc8-4435-a253-fa308a8e7604,Namespace:kube-system,Attempt:6,}" Mar 17 17:40:38.931730 kubelet[2894]: I0317 17:40:38.931639 2894 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77d66960410609d1e8214051a05893a3a45d9fe74839d2d76216065131f8e2e7" Mar 17 17:40:38.933562 containerd[1595]: time="2025-03-17T17:40:38.932862325Z" level=info msg="StopPodSandbox for \"77d66960410609d1e8214051a05893a3a45d9fe74839d2d76216065131f8e2e7\"" Mar 17 17:40:38.933562 containerd[1595]: time="2025-03-17T17:40:38.933368383Z" level=info msg="Ensure that sandbox 77d66960410609d1e8214051a05893a3a45d9fe74839d2d76216065131f8e2e7 in task-service has been cleanup successfully" Mar 17 17:40:38.933752 containerd[1595]: time="2025-03-17T17:40:38.933724014Z" level=info msg="TearDown network for sandbox \"77d66960410609d1e8214051a05893a3a45d9fe74839d2d76216065131f8e2e7\" successfully" Mar 17 17:40:38.933752 containerd[1595]: time="2025-03-17T17:40:38.933749192Z" level=info msg="StopPodSandbox for \"77d66960410609d1e8214051a05893a3a45d9fe74839d2d76216065131f8e2e7\" returns successfully" Mar 17 17:40:38.934256 containerd[1595]: time="2025-03-17T17:40:38.934204103Z" level=info msg="StopPodSandbox for \"4040578a1116952f521b77502ce508aa0d53ea74f89ae69cc9c0b36990e1d298\"" Mar 17 17:40:38.935495 containerd[1595]: time="2025-03-17T17:40:38.935408388Z" level=info msg="TearDown network for sandbox \"4040578a1116952f521b77502ce508aa0d53ea74f89ae69cc9c0b36990e1d298\" successfully" Mar 17 17:40:38.935495 containerd[1595]: time="2025-03-17T17:40:38.935479434Z" level=info msg="StopPodSandbox for \"4040578a1116952f521b77502ce508aa0d53ea74f89ae69cc9c0b36990e1d298\" returns successfully" Mar 17 17:40:38.936488 containerd[1595]: time="2025-03-17T17:40:38.936432548Z" level=info msg="StopPodSandbox for \"2b5d6a23facb015513f2c43f3c63102f6d43652416d6f1140530390b70489c22\"" Mar 17 17:40:38.936617 containerd[1595]: time="2025-03-17T17:40:38.936564210Z" level=info msg="TearDown network for sandbox \"2b5d6a23facb015513f2c43f3c63102f6d43652416d6f1140530390b70489c22\" successfully" Mar 17 17:40:38.936617 containerd[1595]: time="2025-03-17T17:40:38.936608876Z" level=info msg="StopPodSandbox for \"2b5d6a23facb015513f2c43f3c63102f6d43652416d6f1140530390b70489c22\" returns successfully" Mar 17 17:40:38.937019 containerd[1595]: time="2025-03-17T17:40:38.936986960Z" level=info msg="StopPodSandbox for \"d1cb43dde5233e158ca5f81018f9de97cf33fb91019056b6ca49d61b708d25d9\"" Mar 17 17:40:38.937147 containerd[1595]: time="2025-03-17T17:40:38.937092432Z" level=info msg="TearDown network for sandbox \"d1cb43dde5233e158ca5f81018f9de97cf33fb91019056b6ca49d61b708d25d9\" successfully" Mar 17 17:40:38.937147 containerd[1595]: time="2025-03-17T17:40:38.937139121Z" level=info msg="StopPodSandbox for \"d1cb43dde5233e158ca5f81018f9de97cf33fb91019056b6ca49d61b708d25d9\" returns successfully" Mar 17 17:40:38.937844 containerd[1595]: time="2025-03-17T17:40:38.937613870Z" level=info msg="StopPodSandbox for \"f89cfd950642aff10b965e79591c09ac4fdcd01831ada871cb26d2e8ea76c47f\"" Mar 17 17:40:38.937844 containerd[1595]: time="2025-03-17T17:40:38.937725574Z" level=info msg="TearDown network for sandbox \"f89cfd950642aff10b965e79591c09ac4fdcd01831ada871cb26d2e8ea76c47f\" successfully" Mar 17 17:40:38.937844 containerd[1595]: time="2025-03-17T17:40:38.937765090Z" level=info msg="StopPodSandbox for 
\"f89cfd950642aff10b965e79591c09ac4fdcd01831ada871cb26d2e8ea76c47f\" returns successfully" Mar 17 17:40:38.940289 containerd[1595]: time="2025-03-17T17:40:38.938056557Z" level=info msg="StopPodSandbox for \"a68bf2fe20fe8241ece0438f3d2d33a6cb6d0512af7d808f8184cdf260b62dad\"" Mar 17 17:40:38.940289 containerd[1595]: time="2025-03-17T17:40:38.938169172Z" level=info msg="TearDown network for sandbox \"a68bf2fe20fe8241ece0438f3d2d33a6cb6d0512af7d808f8184cdf260b62dad\" successfully" Mar 17 17:40:38.940289 containerd[1595]: time="2025-03-17T17:40:38.938182508Z" level=info msg="StopPodSandbox for \"a68bf2fe20fe8241ece0438f3d2d33a6cb6d0512af7d808f8184cdf260b62dad\" returns successfully" Mar 17 17:40:38.940289 containerd[1595]: time="2025-03-17T17:40:38.938647959Z" level=info msg="StopPodSandbox for \"fd6dfebbb17b8ec1edcc3a670e197687101f350c8be57ce7dffd6c1bfaccbdbd\"" Mar 17 17:40:38.940289 containerd[1595]: time="2025-03-17T17:40:38.938809219Z" level=info msg="TearDown network for sandbox \"fd6dfebbb17b8ec1edcc3a670e197687101f350c8be57ce7dffd6c1bfaccbdbd\" successfully" Mar 17 17:40:38.940289 containerd[1595]: time="2025-03-17T17:40:38.938823325Z" level=info msg="StopPodSandbox for \"fd6dfebbb17b8ec1edcc3a670e197687101f350c8be57ce7dffd6c1bfaccbdbd\" returns successfully" Mar 17 17:40:38.940289 containerd[1595]: time="2025-03-17T17:40:38.939474221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-779d48f5d9-dsbpp,Uid:c6ebfa09-1d89-41a1-975e-0d041b544630,Namespace:calico-apiserver,Attempt:7,}" Mar 17 17:40:39.244836 kubelet[2894]: I0317 17:40:39.244760 2894 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-xzl4f" podStartSLOduration=2.622382832 podStartE2EDuration="31.244739461s" podCreationTimestamp="2025-03-17 17:40:08 +0000 UTC" firstStartedPulling="2025-03-17 17:40:09.193523456 +0000 UTC m=+20.488187315" lastFinishedPulling="2025-03-17 17:40:37.815880085 +0000 UTC m=+49.110543944" observedRunningTime="2025-03-17 17:40:39.244139222 +0000 UTC m=+50.538803111" watchObservedRunningTime="2025-03-17 17:40:39.244739461 +0000 UTC m=+50.539403320" Mar 17 17:40:39.418413 systemd[1]: run-containerd-runc-k8s.io-024e8933a2d8caa418b55bc5ff261d899ca91f0ddd5bf9bc59c977c7e89d4ea1-runc.AhvJnd.mount: Deactivated successfully. Mar 17 17:40:39.418680 systemd[1]: run-netns-cni\x2decd16ffc\x2d3916\x2d52b3\x2dd3e4\x2d8e76dd45627d.mount: Deactivated successfully. Mar 17 17:40:39.418873 systemd[1]: run-netns-cni\x2d130775a8\x2d72ad\x2db2e0\x2d52cc\x2d42df5ddb3ef5.mount: Deactivated successfully. Mar 17 17:40:39.739576 systemd[1]: Started sshd@10-10.0.0.27:22-10.0.0.1:33182.service - OpenSSH per-connection server daemon (10.0.0.1:33182). Mar 17 17:40:39.877784 sshd[5121]: Accepted publickey for core from 10.0.0.1 port 33182 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:40:39.879818 sshd-session[5121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:40:39.884397 systemd-logind[1578]: New session 11 of user core. Mar 17 17:40:39.894595 systemd[1]: Started session-11.scope - Session 11 of User core. 
Mar 17 17:40:39.935332 kubelet[2894]: E0317 17:40:39.935266 2894 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:39.961866 systemd[1]: run-containerd-runc-k8s.io-024e8933a2d8caa418b55bc5ff261d899ca91f0ddd5bf9bc59c977c7e89d4ea1-runc.7i4EZM.mount: Deactivated successfully. Mar 17 17:40:40.195966 sshd[5124]: Connection closed by 10.0.0.1 port 33182 Mar 17 17:40:40.196864 sshd-session[5121]: pam_unix(sshd:session): session closed for user core Mar 17 17:40:40.201441 systemd[1]: sshd@10-10.0.0.27:22-10.0.0.1:33182.service: Deactivated successfully. Mar 17 17:40:40.205185 systemd[1]: session-11.scope: Deactivated successfully. Mar 17 17:40:40.206042 systemd-logind[1578]: Session 11 logged out. Waiting for processes to exit. Mar 17 17:40:40.206960 systemd-logind[1578]: Removed session 11. Mar 17 17:40:43.658033 systemd-networkd[1244]: caliea523948aa0: Link UP Mar 17 17:40:43.658956 systemd-networkd[1244]: caliea523948aa0: Gained carrier Mar 17 17:40:43.794143 containerd[1595]: 2025-03-17 17:40:41.705 [INFO][5160] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 17:40:43.794143 containerd[1595]: 2025-03-17 17:40:41.793 [INFO][5160] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--779d48f5d9--9lw4k-eth0 calico-apiserver-779d48f5d9- calico-apiserver 05bc58a2-8b10-4350-b41e-7b091d9a3a8c 782 0 2025-03-17 17:40:08 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:779d48f5d9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-779d48f5d9-9lw4k eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliea523948aa0 [] []}} ContainerID="597d5b6568018ea07c313e3802edef1421c3974c973861e0db8f2a0811d321c1" Namespace="calico-apiserver" Pod="calico-apiserver-779d48f5d9-9lw4k" WorkloadEndpoint="localhost-k8s-calico--apiserver--779d48f5d9--9lw4k-" Mar 17 17:40:43.794143 containerd[1595]: 2025-03-17 17:40:41.793 [INFO][5160] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="597d5b6568018ea07c313e3802edef1421c3974c973861e0db8f2a0811d321c1" Namespace="calico-apiserver" Pod="calico-apiserver-779d48f5d9-9lw4k" WorkloadEndpoint="localhost-k8s-calico--apiserver--779d48f5d9--9lw4k-eth0" Mar 17 17:40:43.794143 containerd[1595]: 2025-03-17 17:40:42.032 [INFO][5184] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="597d5b6568018ea07c313e3802edef1421c3974c973861e0db8f2a0811d321c1" HandleID="k8s-pod-network.597d5b6568018ea07c313e3802edef1421c3974c973861e0db8f2a0811d321c1" Workload="localhost-k8s-calico--apiserver--779d48f5d9--9lw4k-eth0" Mar 17 17:40:43.794143 containerd[1595]: 2025-03-17 17:40:42.091 [INFO][5184] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="597d5b6568018ea07c313e3802edef1421c3974c973861e0db8f2a0811d321c1" HandleID="k8s-pod-network.597d5b6568018ea07c313e3802edef1421c3974c973861e0db8f2a0811d321c1" Workload="localhost-k8s-calico--apiserver--779d48f5d9--9lw4k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000613830), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-779d48f5d9-9lw4k", "timestamp":"2025-03-17 
17:40:42.032501377 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:40:43.794143 containerd[1595]: 2025-03-17 17:40:42.091 [INFO][5184] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:40:43.794143 containerd[1595]: 2025-03-17 17:40:42.091 [INFO][5184] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 17:40:43.794143 containerd[1595]: 2025-03-17 17:40:42.092 [INFO][5184] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 17 17:40:43.794143 containerd[1595]: 2025-03-17 17:40:42.093 [INFO][5184] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.597d5b6568018ea07c313e3802edef1421c3974c973861e0db8f2a0811d321c1" host="localhost" Mar 17 17:40:43.794143 containerd[1595]: 2025-03-17 17:40:42.099 [INFO][5184] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 17 17:40:43.794143 containerd[1595]: 2025-03-17 17:40:42.102 [INFO][5184] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 17 17:40:43.794143 containerd[1595]: 2025-03-17 17:40:42.103 [INFO][5184] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 17 17:40:43.794143 containerd[1595]: 2025-03-17 17:40:42.105 [INFO][5184] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 17 17:40:43.794143 containerd[1595]: 2025-03-17 17:40:42.105 [INFO][5184] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.597d5b6568018ea07c313e3802edef1421c3974c973861e0db8f2a0811d321c1" host="localhost" Mar 17 17:40:43.794143 containerd[1595]: 2025-03-17 17:40:42.107 [INFO][5184] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.597d5b6568018ea07c313e3802edef1421c3974c973861e0db8f2a0811d321c1 Mar 17 17:40:43.794143 containerd[1595]: 2025-03-17 17:40:42.386 [INFO][5184] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.597d5b6568018ea07c313e3802edef1421c3974c973861e0db8f2a0811d321c1" host="localhost" Mar 17 17:40:43.794143 containerd[1595]: 2025-03-17 17:40:42.670 [INFO][5184] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.597d5b6568018ea07c313e3802edef1421c3974c973861e0db8f2a0811d321c1" host="localhost" Mar 17 17:40:43.794143 containerd[1595]: 2025-03-17 17:40:42.671 [INFO][5184] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.597d5b6568018ea07c313e3802edef1421c3974c973861e0db8f2a0811d321c1" host="localhost" Mar 17 17:40:43.794143 containerd[1595]: 2025-03-17 17:40:42.671 [INFO][5184] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 17 17:40:43.794143 containerd[1595]: 2025-03-17 17:40:42.671 [INFO][5184] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="597d5b6568018ea07c313e3802edef1421c3974c973861e0db8f2a0811d321c1" HandleID="k8s-pod-network.597d5b6568018ea07c313e3802edef1421c3974c973861e0db8f2a0811d321c1" Workload="localhost-k8s-calico--apiserver--779d48f5d9--9lw4k-eth0" Mar 17 17:40:43.795117 containerd[1595]: 2025-03-17 17:40:42.674 [INFO][5160] cni-plugin/k8s.go 386: Populated endpoint ContainerID="597d5b6568018ea07c313e3802edef1421c3974c973861e0db8f2a0811d321c1" Namespace="calico-apiserver" Pod="calico-apiserver-779d48f5d9-9lw4k" WorkloadEndpoint="localhost-k8s-calico--apiserver--779d48f5d9--9lw4k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--779d48f5d9--9lw4k-eth0", GenerateName:"calico-apiserver-779d48f5d9-", Namespace:"calico-apiserver", SelfLink:"", UID:"05bc58a2-8b10-4350-b41e-7b091d9a3a8c", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 40, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"779d48f5d9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-779d48f5d9-9lw4k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliea523948aa0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:40:43.795117 containerd[1595]: 2025-03-17 17:40:42.674 [INFO][5160] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="597d5b6568018ea07c313e3802edef1421c3974c973861e0db8f2a0811d321c1" Namespace="calico-apiserver" Pod="calico-apiserver-779d48f5d9-9lw4k" WorkloadEndpoint="localhost-k8s-calico--apiserver--779d48f5d9--9lw4k-eth0" Mar 17 17:40:43.795117 containerd[1595]: 2025-03-17 17:40:42.674 [INFO][5160] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliea523948aa0 ContainerID="597d5b6568018ea07c313e3802edef1421c3974c973861e0db8f2a0811d321c1" Namespace="calico-apiserver" Pod="calico-apiserver-779d48f5d9-9lw4k" WorkloadEndpoint="localhost-k8s-calico--apiserver--779d48f5d9--9lw4k-eth0" Mar 17 17:40:43.795117 containerd[1595]: 2025-03-17 17:40:43.657 [INFO][5160] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="597d5b6568018ea07c313e3802edef1421c3974c973861e0db8f2a0811d321c1" Namespace="calico-apiserver" Pod="calico-apiserver-779d48f5d9-9lw4k" WorkloadEndpoint="localhost-k8s-calico--apiserver--779d48f5d9--9lw4k-eth0" Mar 17 17:40:43.795117 containerd[1595]: 2025-03-17 17:40:43.657 [INFO][5160] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="597d5b6568018ea07c313e3802edef1421c3974c973861e0db8f2a0811d321c1" Namespace="calico-apiserver" Pod="calico-apiserver-779d48f5d9-9lw4k" WorkloadEndpoint="localhost-k8s-calico--apiserver--779d48f5d9--9lw4k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--779d48f5d9--9lw4k-eth0", GenerateName:"calico-apiserver-779d48f5d9-", Namespace:"calico-apiserver", SelfLink:"", UID:"05bc58a2-8b10-4350-b41e-7b091d9a3a8c", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 40, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"779d48f5d9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"597d5b6568018ea07c313e3802edef1421c3974c973861e0db8f2a0811d321c1", Pod:"calico-apiserver-779d48f5d9-9lw4k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliea523948aa0", MAC:"ee:04:2c:18:de:16", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:40:43.795117 containerd[1595]: 2025-03-17 17:40:43.790 [INFO][5160] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="597d5b6568018ea07c313e3802edef1421c3974c973861e0db8f2a0811d321c1" Namespace="calico-apiserver" Pod="calico-apiserver-779d48f5d9-9lw4k" WorkloadEndpoint="localhost-k8s-calico--apiserver--779d48f5d9--9lw4k-eth0" Mar 17 17:40:44.317448 systemd-networkd[1244]: calie9037c2f6de: Link UP Mar 17 17:40:44.319842 systemd-networkd[1244]: calie9037c2f6de: Gained carrier Mar 17 17:40:44.372169 containerd[1595]: 2025-03-17 17:40:42.464 [INFO][5192] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 17:40:44.372169 containerd[1595]: 2025-03-17 17:40:42.796 [INFO][5192] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5b6b58f89d--g52xg-eth0 calico-kube-controllers-5b6b58f89d- calico-system 43ddfd49-802e-4437-b6f0-ed427cdd6be8 785 0 2025-03-17 17:40:08 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5b6b58f89d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5b6b58f89d-g52xg eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie9037c2f6de [] []}} ContainerID="40d173274e176e401f77deeef130ee1defcb00c26c36117a114187651af3ba2c" Namespace="calico-system" Pod="calico-kube-controllers-5b6b58f89d-g52xg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b6b58f89d--g52xg-" Mar 17 17:40:44.372169 containerd[1595]: 2025-03-17 17:40:42.796 [INFO][5192] cni-plugin/k8s.go 77: 
Extracted identifiers for CmdAddK8s ContainerID="40d173274e176e401f77deeef130ee1defcb00c26c36117a114187651af3ba2c" Namespace="calico-system" Pod="calico-kube-controllers-5b6b58f89d-g52xg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b6b58f89d--g52xg-eth0" Mar 17 17:40:44.372169 containerd[1595]: 2025-03-17 17:40:43.712 [INFO][5222] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="40d173274e176e401f77deeef130ee1defcb00c26c36117a114187651af3ba2c" HandleID="k8s-pod-network.40d173274e176e401f77deeef130ee1defcb00c26c36117a114187651af3ba2c" Workload="localhost-k8s-calico--kube--controllers--5b6b58f89d--g52xg-eth0" Mar 17 17:40:44.372169 containerd[1595]: 2025-03-17 17:40:44.141 [INFO][5222] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="40d173274e176e401f77deeef130ee1defcb00c26c36117a114187651af3ba2c" HandleID="k8s-pod-network.40d173274e176e401f77deeef130ee1defcb00c26c36117a114187651af3ba2c" Workload="localhost-k8s-calico--kube--controllers--5b6b58f89d--g52xg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003baad0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5b6b58f89d-g52xg", "timestamp":"2025-03-17 17:40:43.712592815 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:40:44.372169 containerd[1595]: 2025-03-17 17:40:44.141 [INFO][5222] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:40:44.372169 containerd[1595]: 2025-03-17 17:40:44.141 [INFO][5222] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 17 17:40:44.372169 containerd[1595]: 2025-03-17 17:40:44.141 [INFO][5222] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 17 17:40:44.372169 containerd[1595]: 2025-03-17 17:40:44.144 [INFO][5222] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.40d173274e176e401f77deeef130ee1defcb00c26c36117a114187651af3ba2c" host="localhost" Mar 17 17:40:44.372169 containerd[1595]: 2025-03-17 17:40:44.150 [INFO][5222] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 17 17:40:44.372169 containerd[1595]: 2025-03-17 17:40:44.157 [INFO][5222] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 17 17:40:44.372169 containerd[1595]: 2025-03-17 17:40:44.159 [INFO][5222] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 17 17:40:44.372169 containerd[1595]: 2025-03-17 17:40:44.160 [INFO][5222] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 17 17:40:44.372169 containerd[1595]: 2025-03-17 17:40:44.161 [INFO][5222] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.40d173274e176e401f77deeef130ee1defcb00c26c36117a114187651af3ba2c" host="localhost" Mar 17 17:40:44.372169 containerd[1595]: 2025-03-17 17:40:44.162 [INFO][5222] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.40d173274e176e401f77deeef130ee1defcb00c26c36117a114187651af3ba2c Mar 17 17:40:44.372169 containerd[1595]: 2025-03-17 17:40:44.206 [INFO][5222] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.40d173274e176e401f77deeef130ee1defcb00c26c36117a114187651af3ba2c" host="localhost" Mar 17 17:40:44.372169 containerd[1595]: 2025-03-17 17:40:44.305 [INFO][5222] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.40d173274e176e401f77deeef130ee1defcb00c26c36117a114187651af3ba2c" host="localhost" Mar 17 17:40:44.372169 containerd[1595]: 2025-03-17 17:40:44.306 [INFO][5222] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.40d173274e176e401f77deeef130ee1defcb00c26c36117a114187651af3ba2c" host="localhost" Mar 17 17:40:44.372169 containerd[1595]: 2025-03-17 17:40:44.306 [INFO][5222] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 17 17:40:44.372169 containerd[1595]: 2025-03-17 17:40:44.306 [INFO][5222] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="40d173274e176e401f77deeef130ee1defcb00c26c36117a114187651af3ba2c" HandleID="k8s-pod-network.40d173274e176e401f77deeef130ee1defcb00c26c36117a114187651af3ba2c" Workload="localhost-k8s-calico--kube--controllers--5b6b58f89d--g52xg-eth0" Mar 17 17:40:44.373108 containerd[1595]: 2025-03-17 17:40:44.311 [INFO][5192] cni-plugin/k8s.go 386: Populated endpoint ContainerID="40d173274e176e401f77deeef130ee1defcb00c26c36117a114187651af3ba2c" Namespace="calico-system" Pod="calico-kube-controllers-5b6b58f89d-g52xg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b6b58f89d--g52xg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5b6b58f89d--g52xg-eth0", GenerateName:"calico-kube-controllers-5b6b58f89d-", Namespace:"calico-system", SelfLink:"", UID:"43ddfd49-802e-4437-b6f0-ed427cdd6be8", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 40, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b6b58f89d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5b6b58f89d-g52xg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie9037c2f6de", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:40:44.373108 containerd[1595]: 2025-03-17 17:40:44.311 [INFO][5192] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="40d173274e176e401f77deeef130ee1defcb00c26c36117a114187651af3ba2c" Namespace="calico-system" Pod="calico-kube-controllers-5b6b58f89d-g52xg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b6b58f89d--g52xg-eth0" Mar 17 17:40:44.373108 containerd[1595]: 2025-03-17 17:40:44.311 [INFO][5192] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie9037c2f6de ContainerID="40d173274e176e401f77deeef130ee1defcb00c26c36117a114187651af3ba2c" Namespace="calico-system" Pod="calico-kube-controllers-5b6b58f89d-g52xg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b6b58f89d--g52xg-eth0" Mar 17 17:40:44.373108 containerd[1595]: 2025-03-17 17:40:44.321 [INFO][5192] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="40d173274e176e401f77deeef130ee1defcb00c26c36117a114187651af3ba2c" Namespace="calico-system" Pod="calico-kube-controllers-5b6b58f89d-g52xg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b6b58f89d--g52xg-eth0" Mar 17 17:40:44.373108 containerd[1595]: 2025-03-17 17:40:44.326 [INFO][5192] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID 
to endpoint ContainerID="40d173274e176e401f77deeef130ee1defcb00c26c36117a114187651af3ba2c" Namespace="calico-system" Pod="calico-kube-controllers-5b6b58f89d-g52xg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b6b58f89d--g52xg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5b6b58f89d--g52xg-eth0", GenerateName:"calico-kube-controllers-5b6b58f89d-", Namespace:"calico-system", SelfLink:"", UID:"43ddfd49-802e-4437-b6f0-ed427cdd6be8", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 40, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b6b58f89d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"40d173274e176e401f77deeef130ee1defcb00c26c36117a114187651af3ba2c", Pod:"calico-kube-controllers-5b6b58f89d-g52xg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie9037c2f6de", MAC:"b2:77:92:fd:89:c3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:40:44.373108 containerd[1595]: 2025-03-17 17:40:44.367 [INFO][5192] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="40d173274e176e401f77deeef130ee1defcb00c26c36117a114187651af3ba2c" Namespace="calico-system" Pod="calico-kube-controllers-5b6b58f89d-g52xg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b6b58f89d--g52xg-eth0" Mar 17 17:40:44.395253 kernel: bpftool[5385]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 17 17:40:44.910762 systemd-networkd[1244]: vxlan.calico: Link UP Mar 17 17:40:44.911177 systemd-networkd[1244]: vxlan.calico: Gained carrier Mar 17 17:40:44.920102 systemd-networkd[1244]: calif2f6f87baee: Link UP Mar 17 17:40:44.920328 systemd-networkd[1244]: calif2f6f87baee: Gained carrier Mar 17 17:40:44.982045 containerd[1595]: 2025-03-17 17:40:44.027 [INFO][5327] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 17:40:44.982045 containerd[1595]: 2025-03-17 17:40:44.143 [INFO][5327] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--5xpt7-eth0 coredns-7db6d8ff4d- kube-system 1cbd3c90-0c66-408d-9e5d-1382eccfbde6 780 0 2025-03-17 17:40:01 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-5xpt7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif2f6f87baee [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="9e2cc64a5eba3fa5af778a66103aa86f53a2f8bb7bf68fa79cad12c3de703273" Namespace="kube-system" 
Pod="coredns-7db6d8ff4d-5xpt7" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--5xpt7-" Mar 17 17:40:44.982045 containerd[1595]: 2025-03-17 17:40:44.143 [INFO][5327] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9e2cc64a5eba3fa5af778a66103aa86f53a2f8bb7bf68fa79cad12c3de703273" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5xpt7" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--5xpt7-eth0" Mar 17 17:40:44.982045 containerd[1595]: 2025-03-17 17:40:44.186 [INFO][5344] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9e2cc64a5eba3fa5af778a66103aa86f53a2f8bb7bf68fa79cad12c3de703273" HandleID="k8s-pod-network.9e2cc64a5eba3fa5af778a66103aa86f53a2f8bb7bf68fa79cad12c3de703273" Workload="localhost-k8s-coredns--7db6d8ff4d--5xpt7-eth0" Mar 17 17:40:44.982045 containerd[1595]: 2025-03-17 17:40:44.209 [INFO][5344] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9e2cc64a5eba3fa5af778a66103aa86f53a2f8bb7bf68fa79cad12c3de703273" HandleID="k8s-pod-network.9e2cc64a5eba3fa5af778a66103aa86f53a2f8bb7bf68fa79cad12c3de703273" Workload="localhost-k8s-coredns--7db6d8ff4d--5xpt7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000114510), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-5xpt7", "timestamp":"2025-03-17 17:40:44.186750844 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:40:44.982045 containerd[1595]: 2025-03-17 17:40:44.209 [INFO][5344] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:40:44.982045 containerd[1595]: 2025-03-17 17:40:44.307 [INFO][5344] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 17 17:40:44.982045 containerd[1595]: 2025-03-17 17:40:44.307 [INFO][5344] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 17 17:40:44.982045 containerd[1595]: 2025-03-17 17:40:44.315 [INFO][5344] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9e2cc64a5eba3fa5af778a66103aa86f53a2f8bb7bf68fa79cad12c3de703273" host="localhost" Mar 17 17:40:44.982045 containerd[1595]: 2025-03-17 17:40:44.332 [INFO][5344] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 17 17:40:44.982045 containerd[1595]: 2025-03-17 17:40:44.370 [INFO][5344] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 17 17:40:44.982045 containerd[1595]: 2025-03-17 17:40:44.375 [INFO][5344] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 17 17:40:44.982045 containerd[1595]: 2025-03-17 17:40:44.394 [INFO][5344] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 17 17:40:44.982045 containerd[1595]: 2025-03-17 17:40:44.394 [INFO][5344] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9e2cc64a5eba3fa5af778a66103aa86f53a2f8bb7bf68fa79cad12c3de703273" host="localhost" Mar 17 17:40:44.982045 containerd[1595]: 2025-03-17 17:40:44.399 [INFO][5344] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9e2cc64a5eba3fa5af778a66103aa86f53a2f8bb7bf68fa79cad12c3de703273 Mar 17 17:40:44.982045 containerd[1595]: 2025-03-17 17:40:44.491 [INFO][5344] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9e2cc64a5eba3fa5af778a66103aa86f53a2f8bb7bf68fa79cad12c3de703273" host="localhost" Mar 17 17:40:44.982045 containerd[1595]: 2025-03-17 17:40:44.903 [INFO][5344] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.9e2cc64a5eba3fa5af778a66103aa86f53a2f8bb7bf68fa79cad12c3de703273" host="localhost" Mar 17 17:40:44.982045 containerd[1595]: 2025-03-17 17:40:44.903 [INFO][5344] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.9e2cc64a5eba3fa5af778a66103aa86f53a2f8bb7bf68fa79cad12c3de703273" host="localhost" Mar 17 17:40:44.982045 containerd[1595]: 2025-03-17 17:40:44.903 [INFO][5344] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 17 17:40:44.982045 containerd[1595]: 2025-03-17 17:40:44.903 [INFO][5344] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="9e2cc64a5eba3fa5af778a66103aa86f53a2f8bb7bf68fa79cad12c3de703273" HandleID="k8s-pod-network.9e2cc64a5eba3fa5af778a66103aa86f53a2f8bb7bf68fa79cad12c3de703273" Workload="localhost-k8s-coredns--7db6d8ff4d--5xpt7-eth0" Mar 17 17:40:44.983067 containerd[1595]: 2025-03-17 17:40:44.908 [INFO][5327] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9e2cc64a5eba3fa5af778a66103aa86f53a2f8bb7bf68fa79cad12c3de703273" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5xpt7" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--5xpt7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--5xpt7-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"1cbd3c90-0c66-408d-9e5d-1382eccfbde6", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 40, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-5xpt7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif2f6f87baee", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:40:44.983067 containerd[1595]: 2025-03-17 17:40:44.908 [INFO][5327] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="9e2cc64a5eba3fa5af778a66103aa86f53a2f8bb7bf68fa79cad12c3de703273" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5xpt7" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--5xpt7-eth0" Mar 17 17:40:44.983067 containerd[1595]: 2025-03-17 17:40:44.908 [INFO][5327] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif2f6f87baee ContainerID="9e2cc64a5eba3fa5af778a66103aa86f53a2f8bb7bf68fa79cad12c3de703273" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5xpt7" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--5xpt7-eth0" Mar 17 17:40:44.983067 containerd[1595]: 2025-03-17 17:40:44.917 [INFO][5327] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9e2cc64a5eba3fa5af778a66103aa86f53a2f8bb7bf68fa79cad12c3de703273" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5xpt7" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--5xpt7-eth0" Mar 17 17:40:44.983067 containerd[1595]: 2025-03-17 17:40:44.918 
[INFO][5327] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9e2cc64a5eba3fa5af778a66103aa86f53a2f8bb7bf68fa79cad12c3de703273" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5xpt7" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--5xpt7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--5xpt7-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"1cbd3c90-0c66-408d-9e5d-1382eccfbde6", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 40, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9e2cc64a5eba3fa5af778a66103aa86f53a2f8bb7bf68fa79cad12c3de703273", Pod:"coredns-7db6d8ff4d-5xpt7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif2f6f87baee", MAC:"a6:14:d6:e4:23:51", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:40:44.983067 containerd[1595]: 2025-03-17 17:40:44.976 [INFO][5327] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9e2cc64a5eba3fa5af778a66103aa86f53a2f8bb7bf68fa79cad12c3de703273" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5xpt7" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--5xpt7-eth0" Mar 17 17:40:45.205379 containerd[1595]: time="2025-03-17T17:40:45.205285788Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:40:45.205599 containerd[1595]: time="2025-03-17T17:40:45.205355641Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:40:45.205599 containerd[1595]: time="2025-03-17T17:40:45.205367473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:40:45.205599 containerd[1595]: time="2025-03-17T17:40:45.205468276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:40:45.209526 systemd[1]: Started sshd@11-10.0.0.27:22-10.0.0.1:33188.service - OpenSSH per-connection server daemon (10.0.0.1:33188). 
Mar 17 17:40:45.219537 containerd[1595]: time="2025-03-17T17:40:45.219411688Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:40:45.219973 containerd[1595]: time="2025-03-17T17:40:45.219888948Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:40:45.220109 containerd[1595]: time="2025-03-17T17:40:45.220071075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:40:45.220479 containerd[1595]: time="2025-03-17T17:40:45.220408900Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:40:45.267645 systemd-resolved[1460]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 17:40:45.274383 systemd-resolved[1460]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 17:40:45.286333 sshd[5542]: Accepted publickey for core from 10.0.0.1 port 33188 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:40:45.289809 sshd-session[5542]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:40:45.302284 systemd-logind[1578]: New session 12 of user core. Mar 17 17:40:45.308629 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 17 17:40:45.334703 containerd[1595]: time="2025-03-17T17:40:45.334655164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b6b58f89d-g52xg,Uid:43ddfd49-802e-4437-b6f0-ed427cdd6be8,Namespace:calico-system,Attempt:6,} returns sandbox id \"40d173274e176e401f77deeef130ee1defcb00c26c36117a114187651af3ba2c\"" Mar 17 17:40:45.336494 containerd[1595]: time="2025-03-17T17:40:45.336450520Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\"" Mar 17 17:40:45.350969 containerd[1595]: time="2025-03-17T17:40:45.350567152Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:40:45.350969 containerd[1595]: time="2025-03-17T17:40:45.350637888Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:40:45.350969 containerd[1595]: time="2025-03-17T17:40:45.350652295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:40:45.350969 containerd[1595]: time="2025-03-17T17:40:45.350806419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:40:45.361808 containerd[1595]: time="2025-03-17T17:40:45.361632022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-779d48f5d9-9lw4k,Uid:05bc58a2-8b10-4350-b41e-7b091d9a3a8c,Namespace:calico-apiserver,Attempt:6,} returns sandbox id \"597d5b6568018ea07c313e3802edef1421c3974c973861e0db8f2a0811d321c1\"" Mar 17 17:40:45.393609 systemd-resolved[1460]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 17:40:45.424824 containerd[1595]: time="2025-03-17T17:40:45.424711568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5xpt7,Uid:1cbd3c90-0c66-408d-9e5d-1382eccfbde6,Namespace:kube-system,Attempt:6,} returns sandbox id \"9e2cc64a5eba3fa5af778a66103aa86f53a2f8bb7bf68fa79cad12c3de703273\"" Mar 17 17:40:45.425896 kubelet[2894]: E0317 17:40:45.425592 2894 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:45.427964 containerd[1595]: time="2025-03-17T17:40:45.427824919Z" level=info msg="CreateContainer within sandbox \"9e2cc64a5eba3fa5af778a66103aa86f53a2f8bb7bf68fa79cad12c3de703273\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 17:40:45.447425 systemd-networkd[1244]: caliea523948aa0: Gained IPv6LL Mar 17 17:40:45.532259 systemd-journald[1154]: Under memory pressure, flushing caches. Mar 17 17:40:45.511605 systemd-networkd[1244]: calie9037c2f6de: Gained IPv6LL Mar 17 17:40:45.512467 systemd-resolved[1460]: Under memory pressure, flushing caches. Mar 17 17:40:45.512488 systemd-resolved[1460]: Flushed all caches. Mar 17 17:40:45.716235 systemd-networkd[1244]: cali6deb60eb845: Link UP Mar 17 17:40:45.716738 systemd-networkd[1244]: cali6deb60eb845: Gained carrier Mar 17 17:40:45.725631 sshd[5617]: Connection closed by 10.0.0.1 port 33188 Mar 17 17:40:45.728436 sshd-session[5542]: pam_unix(sshd:session): session closed for user core Mar 17 17:40:45.737443 systemd[1]: Started sshd@12-10.0.0.27:22-10.0.0.1:46058.service - OpenSSH per-connection server daemon (10.0.0.1:46058). Mar 17 17:40:45.737898 systemd[1]: sshd@11-10.0.0.27:22-10.0.0.1:33188.service: Deactivated successfully. Mar 17 17:40:45.741646 systemd-logind[1578]: Session 12 logged out. Waiting for processes to exit. Mar 17 17:40:45.742519 systemd[1]: session-12.scope: Deactivated successfully. Mar 17 17:40:45.743428 systemd-logind[1578]: Removed session 12. Mar 17 17:40:45.769986 sshd[5696]: Accepted publickey for core from 10.0.0.1 port 46058 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:40:45.771544 sshd-session[5696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:40:45.775858 systemd-logind[1578]: New session 13 of user core. Mar 17 17:40:45.786505 systemd[1]: Started session-13.scope - Session 13 of User core. 
Mar 17 17:40:45.866186 containerd[1595]: 2025-03-17 17:40:45.128 [INFO][5450] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--779d48f5d9--dsbpp-eth0 calico-apiserver-779d48f5d9- calico-apiserver c6ebfa09-1d89-41a1-975e-0d041b544630 783 0 2025-03-17 17:40:08 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:779d48f5d9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-779d48f5d9-dsbpp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6deb60eb845 [] []}} ContainerID="c7b42621afadef7c4f12c749cabdac1cb227f6b87236af2198d59d09290955f8" Namespace="calico-apiserver" Pod="calico-apiserver-779d48f5d9-dsbpp" WorkloadEndpoint="localhost-k8s-calico--apiserver--779d48f5d9--dsbpp-" Mar 17 17:40:45.866186 containerd[1595]: 2025-03-17 17:40:45.128 [INFO][5450] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c7b42621afadef7c4f12c749cabdac1cb227f6b87236af2198d59d09290955f8" Namespace="calico-apiserver" Pod="calico-apiserver-779d48f5d9-dsbpp" WorkloadEndpoint="localhost-k8s-calico--apiserver--779d48f5d9--dsbpp-eth0" Mar 17 17:40:45.866186 containerd[1595]: 2025-03-17 17:40:45.208 [INFO][5492] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c7b42621afadef7c4f12c749cabdac1cb227f6b87236af2198d59d09290955f8" HandleID="k8s-pod-network.c7b42621afadef7c4f12c749cabdac1cb227f6b87236af2198d59d09290955f8" Workload="localhost-k8s-calico--apiserver--779d48f5d9--dsbpp-eth0" Mar 17 17:40:45.866186 containerd[1595]: 2025-03-17 17:40:45.300 [INFO][5492] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c7b42621afadef7c4f12c749cabdac1cb227f6b87236af2198d59d09290955f8" HandleID="k8s-pod-network.c7b42621afadef7c4f12c749cabdac1cb227f6b87236af2198d59d09290955f8" Workload="localhost-k8s-calico--apiserver--779d48f5d9--dsbpp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003984d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-779d48f5d9-dsbpp", "timestamp":"2025-03-17 17:40:45.208576816 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:40:45.866186 containerd[1595]: 2025-03-17 17:40:45.300 [INFO][5492] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:40:45.866186 containerd[1595]: 2025-03-17 17:40:45.301 [INFO][5492] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 17 17:40:45.866186 containerd[1595]: 2025-03-17 17:40:45.301 [INFO][5492] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 17 17:40:45.866186 containerd[1595]: 2025-03-17 17:40:45.302 [INFO][5492] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c7b42621afadef7c4f12c749cabdac1cb227f6b87236af2198d59d09290955f8" host="localhost" Mar 17 17:40:45.866186 containerd[1595]: 2025-03-17 17:40:45.306 [INFO][5492] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 17 17:40:45.866186 containerd[1595]: 2025-03-17 17:40:45.319 [INFO][5492] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 17 17:40:45.866186 containerd[1595]: 2025-03-17 17:40:45.324 [INFO][5492] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 17 17:40:45.866186 containerd[1595]: 2025-03-17 17:40:45.329 [INFO][5492] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 17 17:40:45.866186 containerd[1595]: 2025-03-17 17:40:45.329 [INFO][5492] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c7b42621afadef7c4f12c749cabdac1cb227f6b87236af2198d59d09290955f8" host="localhost" Mar 17 17:40:45.866186 containerd[1595]: 2025-03-17 17:40:45.330 [INFO][5492] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c7b42621afadef7c4f12c749cabdac1cb227f6b87236af2198d59d09290955f8 Mar 17 17:40:45.866186 containerd[1595]: 2025-03-17 17:40:45.387 [INFO][5492] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c7b42621afadef7c4f12c749cabdac1cb227f6b87236af2198d59d09290955f8" host="localhost" Mar 17 17:40:45.866186 containerd[1595]: 2025-03-17 17:40:45.710 [INFO][5492] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.c7b42621afadef7c4f12c749cabdac1cb227f6b87236af2198d59d09290955f8" host="localhost" Mar 17 17:40:45.866186 containerd[1595]: 2025-03-17 17:40:45.710 [INFO][5492] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.c7b42621afadef7c4f12c749cabdac1cb227f6b87236af2198d59d09290955f8" host="localhost" Mar 17 17:40:45.866186 containerd[1595]: 2025-03-17 17:40:45.711 [INFO][5492] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 17 17:40:45.866186 containerd[1595]: 2025-03-17 17:40:45.711 [INFO][5492] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="c7b42621afadef7c4f12c749cabdac1cb227f6b87236af2198d59d09290955f8" HandleID="k8s-pod-network.c7b42621afadef7c4f12c749cabdac1cb227f6b87236af2198d59d09290955f8" Workload="localhost-k8s-calico--apiserver--779d48f5d9--dsbpp-eth0" Mar 17 17:40:45.867096 containerd[1595]: 2025-03-17 17:40:45.713 [INFO][5450] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c7b42621afadef7c4f12c749cabdac1cb227f6b87236af2198d59d09290955f8" Namespace="calico-apiserver" Pod="calico-apiserver-779d48f5d9-dsbpp" WorkloadEndpoint="localhost-k8s-calico--apiserver--779d48f5d9--dsbpp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--779d48f5d9--dsbpp-eth0", GenerateName:"calico-apiserver-779d48f5d9-", Namespace:"calico-apiserver", SelfLink:"", UID:"c6ebfa09-1d89-41a1-975e-0d041b544630", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 40, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"779d48f5d9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-779d48f5d9-dsbpp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6deb60eb845", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:40:45.867096 containerd[1595]: 2025-03-17 17:40:45.713 [INFO][5450] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="c7b42621afadef7c4f12c749cabdac1cb227f6b87236af2198d59d09290955f8" Namespace="calico-apiserver" Pod="calico-apiserver-779d48f5d9-dsbpp" WorkloadEndpoint="localhost-k8s-calico--apiserver--779d48f5d9--dsbpp-eth0" Mar 17 17:40:45.867096 containerd[1595]: 2025-03-17 17:40:45.713 [INFO][5450] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6deb60eb845 ContainerID="c7b42621afadef7c4f12c749cabdac1cb227f6b87236af2198d59d09290955f8" Namespace="calico-apiserver" Pod="calico-apiserver-779d48f5d9-dsbpp" WorkloadEndpoint="localhost-k8s-calico--apiserver--779d48f5d9--dsbpp-eth0" Mar 17 17:40:45.867096 containerd[1595]: 2025-03-17 17:40:45.716 [INFO][5450] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c7b42621afadef7c4f12c749cabdac1cb227f6b87236af2198d59d09290955f8" Namespace="calico-apiserver" Pod="calico-apiserver-779d48f5d9-dsbpp" WorkloadEndpoint="localhost-k8s-calico--apiserver--779d48f5d9--dsbpp-eth0" Mar 17 17:40:45.867096 containerd[1595]: 2025-03-17 17:40:45.716 [INFO][5450] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="c7b42621afadef7c4f12c749cabdac1cb227f6b87236af2198d59d09290955f8" Namespace="calico-apiserver" Pod="calico-apiserver-779d48f5d9-dsbpp" WorkloadEndpoint="localhost-k8s-calico--apiserver--779d48f5d9--dsbpp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--779d48f5d9--dsbpp-eth0", GenerateName:"calico-apiserver-779d48f5d9-", Namespace:"calico-apiserver", SelfLink:"", UID:"c6ebfa09-1d89-41a1-975e-0d041b544630", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 40, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"779d48f5d9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c7b42621afadef7c4f12c749cabdac1cb227f6b87236af2198d59d09290955f8", Pod:"calico-apiserver-779d48f5d9-dsbpp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6deb60eb845", MAC:"6a:79:03:f5:1d:3d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:40:45.867096 containerd[1595]: 2025-03-17 17:40:45.861 [INFO][5450] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c7b42621afadef7c4f12c749cabdac1cb227f6b87236af2198d59d09290955f8" Namespace="calico-apiserver" Pod="calico-apiserver-779d48f5d9-dsbpp" WorkloadEndpoint="localhost-k8s-calico--apiserver--779d48f5d9--dsbpp-eth0" Mar 17 17:40:45.961478 containerd[1595]: time="2025-03-17T17:40:45.961332371Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:40:45.962136 containerd[1595]: time="2025-03-17T17:40:45.961968505Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:40:45.962136 containerd[1595]: time="2025-03-17T17:40:45.961991719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:40:45.962136 containerd[1595]: time="2025-03-17T17:40:45.962082092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:40:45.988366 systemd-resolved[1460]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 17:40:46.023348 containerd[1595]: time="2025-03-17T17:40:46.023304647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-779d48f5d9-dsbpp,Uid:c6ebfa09-1d89-41a1-975e-0d041b544630,Namespace:calico-apiserver,Attempt:7,} returns sandbox id \"c7b42621afadef7c4f12c749cabdac1cb227f6b87236af2198d59d09290955f8\"" Mar 17 17:40:46.138235 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount939925432.mount: Deactivated successfully. Mar 17 17:40:46.412634 containerd[1595]: time="2025-03-17T17:40:46.412473492Z" level=info msg="CreateContainer within sandbox \"9e2cc64a5eba3fa5af778a66103aa86f53a2f8bb7bf68fa79cad12c3de703273\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"89ddfaa109a325390a3c832948a9b6e9022ace65bbeb7178a23e84e7fdc82747\"" Mar 17 17:40:46.413882 containerd[1595]: time="2025-03-17T17:40:46.413765166Z" level=info msg="StartContainer for \"89ddfaa109a325390a3c832948a9b6e9022ace65bbeb7178a23e84e7fdc82747\"" Mar 17 17:40:46.426274 sshd[5702]: Connection closed by 10.0.0.1 port 46058 Mar 17 17:40:46.426492 sshd-session[5696]: pam_unix(sshd:session): session closed for user core Mar 17 17:40:46.439783 systemd[1]: Started sshd@13-10.0.0.27:22-10.0.0.1:46074.service - OpenSSH per-connection server daemon (10.0.0.1:46074). Mar 17 17:40:46.440416 systemd[1]: sshd@12-10.0.0.27:22-10.0.0.1:46058.service: Deactivated successfully. Mar 17 17:40:46.450294 systemd[1]: session-13.scope: Deactivated successfully. Mar 17 17:40:46.455846 systemd-logind[1578]: Session 13 logged out. Waiting for processes to exit. Mar 17 17:40:46.458918 systemd-logind[1578]: Removed session 13. Mar 17 17:40:46.465668 systemd-networkd[1244]: caliddb0b9eb113: Link UP Mar 17 17:40:46.466918 systemd-networkd[1244]: caliddb0b9eb113: Gained carrier Mar 17 17:40:46.496995 sshd[5769]: Accepted publickey for core from 10.0.0.1 port 46074 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:40:46.498846 sshd-session[5769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:40:46.503500 systemd-logind[1578]: New session 14 of user core. Mar 17 17:40:46.513580 systemd[1]: Started session-14.scope - Session 14 of User core. 
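The ipam/ipam.go entries above trace Calico's block-based assignment: take the host-wide IPAM lock, confirm this host's affinity for the 192.168.88.128/26 block, load the block, claim the next free address (192.168.88.132 here), write the block back, and release the lock. The sketch below only illustrates "claim the next free address in an affine /26 block" with hypothetical types; it is not Calico's implementation.

```go
package main

import (
	"fmt"
	"net/netip"
)

// block is a hypothetical stand-in for a Calico IPAM block: a /26 CIDR plus
// the set of addresses already allocated to workloads on this host.
type block struct {
	cidr      netip.Prefix
	allocated map[netip.Addr]bool
}

// assignNext claims the first unallocated address in the block, mimicking the
// "Attempting to assign 1 addresses from block" step in the log above.
func (b *block) assignNext() (netip.Addr, bool) {
	for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
		if !b.allocated[a] {
			b.allocated[a] = true
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	b := &block{
		cidr: netip.MustParsePrefix("192.168.88.128/26"),
		allocated: map[netip.Addr]bool{
			netip.MustParseAddr("192.168.88.128"): true,
			netip.MustParseAddr("192.168.88.129"): true,
			netip.MustParseAddr("192.168.88.130"): true,
			netip.MustParseAddr("192.168.88.131"): true,
		},
	}
	if ip, ok := b.assignNext(); ok {
		fmt.Println("claimed", ip) // claimed 192.168.88.132, matching the log
	}
}
```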
Mar 17 17:40:46.589972 containerd[1595]: 2025-03-17 17:40:45.127 [INFO][5467] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--24zxx-eth0 csi-node-driver- calico-system e6243402-8f9c-4b35-b2c7-317fe823ae81 608 0 2025-03-17 17:40:08 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:69ddf5d45d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-24zxx eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliddb0b9eb113 [] []}} ContainerID="ac2b8e72c4ac2662e128644045c83f8c431b4e048312935dd957b6cbb5ea209a" Namespace="calico-system" Pod="csi-node-driver-24zxx" WorkloadEndpoint="localhost-k8s-csi--node--driver--24zxx-" Mar 17 17:40:46.589972 containerd[1595]: 2025-03-17 17:40:45.127 [INFO][5467] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ac2b8e72c4ac2662e128644045c83f8c431b4e048312935dd957b6cbb5ea209a" Namespace="calico-system" Pod="csi-node-driver-24zxx" WorkloadEndpoint="localhost-k8s-csi--node--driver--24zxx-eth0" Mar 17 17:40:46.589972 containerd[1595]: 2025-03-17 17:40:45.203 [INFO][5493] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ac2b8e72c4ac2662e128644045c83f8c431b4e048312935dd957b6cbb5ea209a" HandleID="k8s-pod-network.ac2b8e72c4ac2662e128644045c83f8c431b4e048312935dd957b6cbb5ea209a" Workload="localhost-k8s-csi--node--driver--24zxx-eth0" Mar 17 17:40:46.589972 containerd[1595]: 2025-03-17 17:40:45.300 [INFO][5493] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ac2b8e72c4ac2662e128644045c83f8c431b4e048312935dd957b6cbb5ea209a" HandleID="k8s-pod-network.ac2b8e72c4ac2662e128644045c83f8c431b4e048312935dd957b6cbb5ea209a" Workload="localhost-k8s-csi--node--driver--24zxx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000511a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-24zxx", "timestamp":"2025-03-17 17:40:45.203280432 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:40:46.589972 containerd[1595]: 2025-03-17 17:40:45.301 [INFO][5493] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:40:46.589972 containerd[1595]: 2025-03-17 17:40:45.711 [INFO][5493] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 17 17:40:46.589972 containerd[1595]: 2025-03-17 17:40:45.711 [INFO][5493] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 17 17:40:46.589972 containerd[1595]: 2025-03-17 17:40:45.723 [INFO][5493] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ac2b8e72c4ac2662e128644045c83f8c431b4e048312935dd957b6cbb5ea209a" host="localhost" Mar 17 17:40:46.589972 containerd[1595]: 2025-03-17 17:40:45.930 [INFO][5493] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 17 17:40:46.589972 containerd[1595]: 2025-03-17 17:40:46.331 [INFO][5493] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 17 17:40:46.589972 containerd[1595]: 2025-03-17 17:40:46.415 [INFO][5493] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 17 17:40:46.589972 containerd[1595]: 2025-03-17 17:40:46.417 [INFO][5493] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 17 17:40:46.589972 containerd[1595]: 2025-03-17 17:40:46.418 [INFO][5493] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ac2b8e72c4ac2662e128644045c83f8c431b4e048312935dd957b6cbb5ea209a" host="localhost" Mar 17 17:40:46.589972 containerd[1595]: 2025-03-17 17:40:46.423 [INFO][5493] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ac2b8e72c4ac2662e128644045c83f8c431b4e048312935dd957b6cbb5ea209a Mar 17 17:40:46.589972 containerd[1595]: 2025-03-17 17:40:46.430 [INFO][5493] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ac2b8e72c4ac2662e128644045c83f8c431b4e048312935dd957b6cbb5ea209a" host="localhost" Mar 17 17:40:46.589972 containerd[1595]: 2025-03-17 17:40:46.446 [INFO][5493] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.ac2b8e72c4ac2662e128644045c83f8c431b4e048312935dd957b6cbb5ea209a" host="localhost" Mar 17 17:40:46.589972 containerd[1595]: 2025-03-17 17:40:46.447 [INFO][5493] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.ac2b8e72c4ac2662e128644045c83f8c431b4e048312935dd957b6cbb5ea209a" host="localhost" Mar 17 17:40:46.589972 containerd[1595]: 2025-03-17 17:40:46.448 [INFO][5493] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 17 17:40:46.589972 containerd[1595]: 2025-03-17 17:40:46.448 [INFO][5493] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="ac2b8e72c4ac2662e128644045c83f8c431b4e048312935dd957b6cbb5ea209a" HandleID="k8s-pod-network.ac2b8e72c4ac2662e128644045c83f8c431b4e048312935dd957b6cbb5ea209a" Workload="localhost-k8s-csi--node--driver--24zxx-eth0" Mar 17 17:40:46.590693 containerd[1595]: 2025-03-17 17:40:46.458 [INFO][5467] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ac2b8e72c4ac2662e128644045c83f8c431b4e048312935dd957b6cbb5ea209a" Namespace="calico-system" Pod="csi-node-driver-24zxx" WorkloadEndpoint="localhost-k8s-csi--node--driver--24zxx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--24zxx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e6243402-8f9c-4b35-b2c7-317fe823ae81", ResourceVersion:"608", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 40, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"69ddf5d45d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-24zxx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliddb0b9eb113", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:40:46.590693 containerd[1595]: 2025-03-17 17:40:46.460 [INFO][5467] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="ac2b8e72c4ac2662e128644045c83f8c431b4e048312935dd957b6cbb5ea209a" Namespace="calico-system" Pod="csi-node-driver-24zxx" WorkloadEndpoint="localhost-k8s-csi--node--driver--24zxx-eth0" Mar 17 17:40:46.590693 containerd[1595]: 2025-03-17 17:40:46.460 [INFO][5467] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliddb0b9eb113 ContainerID="ac2b8e72c4ac2662e128644045c83f8c431b4e048312935dd957b6cbb5ea209a" Namespace="calico-system" Pod="csi-node-driver-24zxx" WorkloadEndpoint="localhost-k8s-csi--node--driver--24zxx-eth0" Mar 17 17:40:46.590693 containerd[1595]: 2025-03-17 17:40:46.467 [INFO][5467] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ac2b8e72c4ac2662e128644045c83f8c431b4e048312935dd957b6cbb5ea209a" Namespace="calico-system" Pod="csi-node-driver-24zxx" WorkloadEndpoint="localhost-k8s-csi--node--driver--24zxx-eth0" Mar 17 17:40:46.590693 containerd[1595]: 2025-03-17 17:40:46.468 [INFO][5467] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ac2b8e72c4ac2662e128644045c83f8c431b4e048312935dd957b6cbb5ea209a" Namespace="calico-system" Pod="csi-node-driver-24zxx" WorkloadEndpoint="localhost-k8s-csi--node--driver--24zxx-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--24zxx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e6243402-8f9c-4b35-b2c7-317fe823ae81", ResourceVersion:"608", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 40, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"69ddf5d45d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ac2b8e72c4ac2662e128644045c83f8c431b4e048312935dd957b6cbb5ea209a", Pod:"csi-node-driver-24zxx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliddb0b9eb113", MAC:"6e:22:7e:c7:3e:35", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:40:46.590693 containerd[1595]: 2025-03-17 17:40:46.587 [INFO][5467] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ac2b8e72c4ac2662e128644045c83f8c431b4e048312935dd957b6cbb5ea209a" Namespace="calico-system" Pod="csi-node-driver-24zxx" WorkloadEndpoint="localhost-k8s-csi--node--driver--24zxx-eth0" Mar 17 17:40:46.612649 containerd[1595]: time="2025-03-17T17:40:46.612572141Z" level=info msg="StartContainer for \"89ddfaa109a325390a3c832948a9b6e9022ace65bbeb7178a23e84e7fdc82747\" returns successfully" Mar 17 17:40:46.707818 containerd[1595]: time="2025-03-17T17:40:46.707664246Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:40:46.710765 containerd[1595]: time="2025-03-17T17:40:46.710526174Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:40:46.710765 containerd[1595]: time="2025-03-17T17:40:46.710550972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:40:46.710765 containerd[1595]: time="2025-03-17T17:40:46.710661152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:40:46.727862 sshd[5795]: Connection closed by 10.0.0.1 port 46074 Mar 17 17:40:46.726350 sshd-session[5769]: pam_unix(sshd:session): session closed for user core Mar 17 17:40:46.735532 systemd[1]: sshd@13-10.0.0.27:22-10.0.0.1:46074.service: Deactivated successfully. Mar 17 17:40:46.742294 systemd[1]: session-14.scope: Deactivated successfully. Mar 17 17:40:46.742461 systemd-logind[1578]: Session 14 logged out. Waiting for processes to exit. Mar 17 17:40:46.753034 systemd-logind[1578]: Removed session 14. 
Mar 17 17:40:46.753212 systemd-networkd[1244]: cali5e818e0fd76: Link UP Mar 17 17:40:46.754342 systemd-networkd[1244]: cali5e818e0fd76: Gained carrier Mar 17 17:40:46.772561 systemd-resolved[1460]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 17:40:46.785376 containerd[1595]: 2025-03-17 17:40:45.148 [INFO][5475] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--j5l2k-eth0 coredns-7db6d8ff4d- kube-system e68c1525-3bc8-4435-a253-fa308a8e7604 784 0 2025-03-17 17:40:01 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-j5l2k eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5e818e0fd76 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="9c9a271e0e3aa437aeb739f58293e68230b61e705fe2d7fd94915a89ee26131f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j5l2k" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--j5l2k-" Mar 17 17:40:46.785376 containerd[1595]: 2025-03-17 17:40:45.148 [INFO][5475] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9c9a271e0e3aa437aeb739f58293e68230b61e705fe2d7fd94915a89ee26131f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j5l2k" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--j5l2k-eth0" Mar 17 17:40:46.785376 containerd[1595]: 2025-03-17 17:40:45.251 [INFO][5505] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9c9a271e0e3aa437aeb739f58293e68230b61e705fe2d7fd94915a89ee26131f" HandleID="k8s-pod-network.9c9a271e0e3aa437aeb739f58293e68230b61e705fe2d7fd94915a89ee26131f" Workload="localhost-k8s-coredns--7db6d8ff4d--j5l2k-eth0" Mar 17 17:40:46.785376 containerd[1595]: 2025-03-17 17:40:45.306 [INFO][5505] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9c9a271e0e3aa437aeb739f58293e68230b61e705fe2d7fd94915a89ee26131f" HandleID="k8s-pod-network.9c9a271e0e3aa437aeb739f58293e68230b61e705fe2d7fd94915a89ee26131f" Workload="localhost-k8s-coredns--7db6d8ff4d--j5l2k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00053bb40), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-j5l2k", "timestamp":"2025-03-17 17:40:45.251748171 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:40:46.785376 containerd[1595]: 2025-03-17 17:40:45.310 [INFO][5505] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:40:46.785376 containerd[1595]: 2025-03-17 17:40:46.447 [INFO][5505] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 17 17:40:46.785376 containerd[1595]: 2025-03-17 17:40:46.448 [INFO][5505] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 17 17:40:46.785376 containerd[1595]: 2025-03-17 17:40:46.601 [INFO][5505] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9c9a271e0e3aa437aeb739f58293e68230b61e705fe2d7fd94915a89ee26131f" host="localhost" Mar 17 17:40:46.785376 containerd[1595]: 2025-03-17 17:40:46.693 [INFO][5505] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 17 17:40:46.785376 containerd[1595]: 2025-03-17 17:40:46.704 [INFO][5505] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 17 17:40:46.785376 containerd[1595]: 2025-03-17 17:40:46.707 [INFO][5505] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 17 17:40:46.785376 containerd[1595]: 2025-03-17 17:40:46.710 [INFO][5505] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 17 17:40:46.785376 containerd[1595]: 2025-03-17 17:40:46.710 [INFO][5505] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9c9a271e0e3aa437aeb739f58293e68230b61e705fe2d7fd94915a89ee26131f" host="localhost" Mar 17 17:40:46.785376 containerd[1595]: 2025-03-17 17:40:46.712 [INFO][5505] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9c9a271e0e3aa437aeb739f58293e68230b61e705fe2d7fd94915a89ee26131f Mar 17 17:40:46.785376 containerd[1595]: 2025-03-17 17:40:46.718 [INFO][5505] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9c9a271e0e3aa437aeb739f58293e68230b61e705fe2d7fd94915a89ee26131f" host="localhost" Mar 17 17:40:46.785376 containerd[1595]: 2025-03-17 17:40:46.729 [INFO][5505] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.9c9a271e0e3aa437aeb739f58293e68230b61e705fe2d7fd94915a89ee26131f" host="localhost" Mar 17 17:40:46.785376 containerd[1595]: 2025-03-17 17:40:46.729 [INFO][5505] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.9c9a271e0e3aa437aeb739f58293e68230b61e705fe2d7fd94915a89ee26131f" host="localhost" Mar 17 17:40:46.785376 containerd[1595]: 2025-03-17 17:40:46.729 [INFO][5505] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 17 17:40:46.785376 containerd[1595]: 2025-03-17 17:40:46.729 [INFO][5505] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="9c9a271e0e3aa437aeb739f58293e68230b61e705fe2d7fd94915a89ee26131f" HandleID="k8s-pod-network.9c9a271e0e3aa437aeb739f58293e68230b61e705fe2d7fd94915a89ee26131f" Workload="localhost-k8s-coredns--7db6d8ff4d--j5l2k-eth0" Mar 17 17:40:46.786433 containerd[1595]: 2025-03-17 17:40:46.744 [INFO][5475] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9c9a271e0e3aa437aeb739f58293e68230b61e705fe2d7fd94915a89ee26131f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j5l2k" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--j5l2k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--j5l2k-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"e68c1525-3bc8-4435-a253-fa308a8e7604", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 40, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-j5l2k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5e818e0fd76", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:40:46.786433 containerd[1595]: 2025-03-17 17:40:46.744 [INFO][5475] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="9c9a271e0e3aa437aeb739f58293e68230b61e705fe2d7fd94915a89ee26131f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j5l2k" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--j5l2k-eth0" Mar 17 17:40:46.786433 containerd[1595]: 2025-03-17 17:40:46.745 [INFO][5475] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5e818e0fd76 ContainerID="9c9a271e0e3aa437aeb739f58293e68230b61e705fe2d7fd94915a89ee26131f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j5l2k" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--j5l2k-eth0" Mar 17 17:40:46.786433 containerd[1595]: 2025-03-17 17:40:46.755 [INFO][5475] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9c9a271e0e3aa437aeb739f58293e68230b61e705fe2d7fd94915a89ee26131f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j5l2k" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--j5l2k-eth0" Mar 17 17:40:46.786433 containerd[1595]: 2025-03-17 17:40:46.755 
[INFO][5475] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9c9a271e0e3aa437aeb739f58293e68230b61e705fe2d7fd94915a89ee26131f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j5l2k" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--j5l2k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--j5l2k-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"e68c1525-3bc8-4435-a253-fa308a8e7604", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 40, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9c9a271e0e3aa437aeb739f58293e68230b61e705fe2d7fd94915a89ee26131f", Pod:"coredns-7db6d8ff4d-j5l2k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5e818e0fd76", MAC:"f2:22:f7:ab:9b:c5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:40:46.786433 containerd[1595]: 2025-03-17 17:40:46.777 [INFO][5475] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9c9a271e0e3aa437aeb739f58293e68230b61e705fe2d7fd94915a89ee26131f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j5l2k" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--j5l2k-eth0" Mar 17 17:40:46.792941 systemd-networkd[1244]: vxlan.calico: Gained IPv6LL Mar 17 17:40:46.799918 containerd[1595]: time="2025-03-17T17:40:46.799767041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-24zxx,Uid:e6243402-8f9c-4b35-b2c7-317fe823ae81,Namespace:calico-system,Attempt:5,} returns sandbox id \"ac2b8e72c4ac2662e128644045c83f8c431b4e048312935dd957b6cbb5ea209a\"" Mar 17 17:40:46.928381 systemd-networkd[1244]: calif2f6f87baee: Gained IPv6LL Mar 17 17:40:46.953374 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1154632734.mount: Deactivated successfully. 
Mar 17 17:40:46.971814 kubelet[2894]: E0317 17:40:46.971682 2894 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:47.082215 kubelet[2894]: I0317 17:40:47.082124 2894 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-5xpt7" podStartSLOduration=46.082099993 podStartE2EDuration="46.082099993s" podCreationTimestamp="2025-03-17 17:40:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:40:47.081897736 +0000 UTC m=+58.376561606" watchObservedRunningTime="2025-03-17 17:40:47.082099993 +0000 UTC m=+58.376763852" Mar 17 17:40:47.093094 containerd[1595]: time="2025-03-17T17:40:47.092958552Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:40:47.093094 containerd[1595]: time="2025-03-17T17:40:47.093041439Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:40:47.093690 containerd[1595]: time="2025-03-17T17:40:47.093062299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:40:47.093690 containerd[1595]: time="2025-03-17T17:40:47.093260497Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:40:47.114120 systemd-networkd[1244]: cali6deb60eb845: Gained IPv6LL Mar 17 17:40:47.132493 systemd-resolved[1460]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 17:40:47.160572 containerd[1595]: time="2025-03-17T17:40:47.160532436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j5l2k,Uid:e68c1525-3bc8-4435-a253-fa308a8e7604,Namespace:kube-system,Attempt:6,} returns sandbox id \"9c9a271e0e3aa437aeb739f58293e68230b61e705fe2d7fd94915a89ee26131f\"" Mar 17 17:40:47.162132 kubelet[2894]: E0317 17:40:47.161584 2894 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:47.165188 containerd[1595]: time="2025-03-17T17:40:47.165127856Z" level=info msg="CreateContainer within sandbox \"9c9a271e0e3aa437aeb739f58293e68230b61e705fe2d7fd94915a89ee26131f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 17:40:47.600314 containerd[1595]: time="2025-03-17T17:40:47.600193002Z" level=info msg="CreateContainer within sandbox \"9c9a271e0e3aa437aeb739f58293e68230b61e705fe2d7fd94915a89ee26131f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3c028343e6c92497868953e548a8f804b08c148a006e6d3c0b2f0f131ece1e9d\"" Mar 17 17:40:47.601704 containerd[1595]: time="2025-03-17T17:40:47.601431392Z" level=info msg="StartContainer for \"3c028343e6c92497868953e548a8f804b08c148a006e6d3c0b2f0f131ece1e9d\"" Mar 17 17:40:47.697103 containerd[1595]: time="2025-03-17T17:40:47.696907446Z" level=info msg="StartContainer for \"3c028343e6c92497868953e548a8f804b08c148a006e6d3c0b2f0f131ece1e9d\" returns successfully" Mar 17 17:40:47.990433 kubelet[2894]: E0317 17:40:47.990380 2894 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:47.991038 kubelet[2894]: E0317 17:40:47.990821 2894 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:48.082278 kubelet[2894]: I0317 17:40:48.076343 2894 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-j5l2k" podStartSLOduration=47.07631715 podStartE2EDuration="47.07631715s" podCreationTimestamp="2025-03-17 17:40:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:40:48.004667579 +0000 UTC m=+59.299331439" watchObservedRunningTime="2025-03-17 17:40:48.07631715 +0000 UTC m=+59.370981019" Mar 17 17:40:48.268332 systemd-networkd[1244]: cali5e818e0fd76: Gained IPv6LL Mar 17 17:40:48.457723 systemd-networkd[1244]: caliddb0b9eb113: Gained IPv6LL Mar 17 17:40:48.781293 containerd[1595]: time="2025-03-17T17:40:48.780696275Z" level=info msg="StopPodSandbox for \"fd6dfebbb17b8ec1edcc3a670e197687101f350c8be57ce7dffd6c1bfaccbdbd\"" Mar 17 17:40:48.781293 containerd[1595]: time="2025-03-17T17:40:48.780832634Z" level=info msg="TearDown network for sandbox \"fd6dfebbb17b8ec1edcc3a670e197687101f350c8be57ce7dffd6c1bfaccbdbd\" successfully" Mar 17 17:40:48.781293 containerd[1595]: time="2025-03-17T17:40:48.780848695Z" level=info msg="StopPodSandbox for \"fd6dfebbb17b8ec1edcc3a670e197687101f350c8be57ce7dffd6c1bfaccbdbd\" returns successfully" Mar 17 17:40:48.832884 containerd[1595]: time="2025-03-17T17:40:48.832747029Z" level=info msg="RemovePodSandbox for \"fd6dfebbb17b8ec1edcc3a670e197687101f350c8be57ce7dffd6c1bfaccbdbd\"" Mar 17 17:40:48.848247 containerd[1595]: time="2025-03-17T17:40:48.847198096Z" level=info msg="Forcibly stopping sandbox \"fd6dfebbb17b8ec1edcc3a670e197687101f350c8be57ce7dffd6c1bfaccbdbd\"" Mar 17 17:40:48.848247 containerd[1595]: time="2025-03-17T17:40:48.847389020Z" level=info msg="TearDown network for sandbox \"fd6dfebbb17b8ec1edcc3a670e197687101f350c8be57ce7dffd6c1bfaccbdbd\" successfully" Mar 17 17:40:48.909472 containerd[1595]: time="2025-03-17T17:40:48.906323761Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fd6dfebbb17b8ec1edcc3a670e197687101f350c8be57ce7dffd6c1bfaccbdbd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
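The pod_startup_latency_tracker entries report podStartSLOduration as the gap between the pod's creationTimestamp (17:40:01) and the watch-observed running time (17:40:48.076...); the image-pulling timestamps are zero here, so nothing is subtracted. A quick check of that arithmetic as a small Go sketch (a hypothetical recomputation, not the kubelet tracker itself):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the coredns-7db6d8ff4d-j5l2k entry above.
	created, _ := time.Parse(time.RFC3339Nano, "2025-03-17T17:40:01Z")
	running, _ := time.Parse(time.RFC3339Nano, "2025-03-17T17:40:48.07631715Z")

	// With no image-pulling time to subtract, the SLO duration is simply
	// watchObservedRunningTime - podCreationTimestamp.
	fmt.Println(running.Sub(created)) // 47.07631715s, matching podStartSLOduration
}
```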
Mar 17 17:40:48.909932 containerd[1595]: time="2025-03-17T17:40:48.909894555Z" level=info msg="RemovePodSandbox \"fd6dfebbb17b8ec1edcc3a670e197687101f350c8be57ce7dffd6c1bfaccbdbd\" returns successfully" Mar 17 17:40:48.915004 containerd[1595]: time="2025-03-17T17:40:48.914960859Z" level=info msg="StopPodSandbox for \"a68bf2fe20fe8241ece0438f3d2d33a6cb6d0512af7d808f8184cdf260b62dad\"" Mar 17 17:40:48.915631 containerd[1595]: time="2025-03-17T17:40:48.915607712Z" level=info msg="TearDown network for sandbox \"a68bf2fe20fe8241ece0438f3d2d33a6cb6d0512af7d808f8184cdf260b62dad\" successfully" Mar 17 17:40:48.915721 containerd[1595]: time="2025-03-17T17:40:48.915707382Z" level=info msg="StopPodSandbox for \"a68bf2fe20fe8241ece0438f3d2d33a6cb6d0512af7d808f8184cdf260b62dad\" returns successfully" Mar 17 17:40:48.919554 containerd[1595]: time="2025-03-17T17:40:48.917716891Z" level=info msg="RemovePodSandbox for \"a68bf2fe20fe8241ece0438f3d2d33a6cb6d0512af7d808f8184cdf260b62dad\"" Mar 17 17:40:48.919554 containerd[1595]: time="2025-03-17T17:40:48.917764652Z" level=info msg="Forcibly stopping sandbox \"a68bf2fe20fe8241ece0438f3d2d33a6cb6d0512af7d808f8184cdf260b62dad\"" Mar 17 17:40:48.919554 containerd[1595]: time="2025-03-17T17:40:48.917896223Z" level=info msg="TearDown network for sandbox \"a68bf2fe20fe8241ece0438f3d2d33a6cb6d0512af7d808f8184cdf260b62dad\" successfully" Mar 17 17:40:48.937062 containerd[1595]: time="2025-03-17T17:40:48.936995857Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a68bf2fe20fe8241ece0438f3d2d33a6cb6d0512af7d808f8184cdf260b62dad\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 17 17:40:48.937381 containerd[1595]: time="2025-03-17T17:40:48.937360743Z" level=info msg="RemovePodSandbox \"a68bf2fe20fe8241ece0438f3d2d33a6cb6d0512af7d808f8184cdf260b62dad\" returns successfully" Mar 17 17:40:48.939364 containerd[1595]: time="2025-03-17T17:40:48.939205649Z" level=info msg="StopPodSandbox for \"f89cfd950642aff10b965e79591c09ac4fdcd01831ada871cb26d2e8ea76c47f\"" Mar 17 17:40:48.939424 containerd[1595]: time="2025-03-17T17:40:48.939376043Z" level=info msg="TearDown network for sandbox \"f89cfd950642aff10b965e79591c09ac4fdcd01831ada871cb26d2e8ea76c47f\" successfully" Mar 17 17:40:48.939424 containerd[1595]: time="2025-03-17T17:40:48.939389159Z" level=info msg="StopPodSandbox for \"f89cfd950642aff10b965e79591c09ac4fdcd01831ada871cb26d2e8ea76c47f\" returns successfully" Mar 17 17:40:48.941672 containerd[1595]: time="2025-03-17T17:40:48.939744535Z" level=info msg="RemovePodSandbox for \"f89cfd950642aff10b965e79591c09ac4fdcd01831ada871cb26d2e8ea76c47f\"" Mar 17 17:40:48.941672 containerd[1595]: time="2025-03-17T17:40:48.939775093Z" level=info msg="Forcibly stopping sandbox \"f89cfd950642aff10b965e79591c09ac4fdcd01831ada871cb26d2e8ea76c47f\"" Mar 17 17:40:48.941672 containerd[1595]: time="2025-03-17T17:40:48.939846731Z" level=info msg="TearDown network for sandbox \"f89cfd950642aff10b965e79591c09ac4fdcd01831ada871cb26d2e8ea76c47f\" successfully" Mar 17 17:40:49.012450 kubelet[2894]: E0317 17:40:49.008035 2894 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:49.012450 kubelet[2894]: E0317 17:40:49.008702 2894 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:49.116844 containerd[1595]: time="2025-03-17T17:40:49.116534851Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f89cfd950642aff10b965e79591c09ac4fdcd01831ada871cb26d2e8ea76c47f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 17 17:40:49.116844 containerd[1595]: time="2025-03-17T17:40:49.116632807Z" level=info msg="RemovePodSandbox \"f89cfd950642aff10b965e79591c09ac4fdcd01831ada871cb26d2e8ea76c47f\" returns successfully" Mar 17 17:40:49.122497 containerd[1595]: time="2025-03-17T17:40:49.118048444Z" level=info msg="StopPodSandbox for \"d1cb43dde5233e158ca5f81018f9de97cf33fb91019056b6ca49d61b708d25d9\"" Mar 17 17:40:49.122497 containerd[1595]: time="2025-03-17T17:40:49.118186276Z" level=info msg="TearDown network for sandbox \"d1cb43dde5233e158ca5f81018f9de97cf33fb91019056b6ca49d61b708d25d9\" successfully" Mar 17 17:40:49.122497 containerd[1595]: time="2025-03-17T17:40:49.118207537Z" level=info msg="StopPodSandbox for \"d1cb43dde5233e158ca5f81018f9de97cf33fb91019056b6ca49d61b708d25d9\" returns successfully" Mar 17 17:40:49.122497 containerd[1595]: time="2025-03-17T17:40:49.118945903Z" level=info msg="RemovePodSandbox for \"d1cb43dde5233e158ca5f81018f9de97cf33fb91019056b6ca49d61b708d25d9\"" Mar 17 17:40:49.122497 containerd[1595]: time="2025-03-17T17:40:49.118987863Z" level=info msg="Forcibly stopping sandbox \"d1cb43dde5233e158ca5f81018f9de97cf33fb91019056b6ca49d61b708d25d9\"" Mar 17 17:40:49.122497 containerd[1595]: time="2025-03-17T17:40:49.119100147Z" level=info msg="TearDown network for sandbox \"d1cb43dde5233e158ca5f81018f9de97cf33fb91019056b6ca49d61b708d25d9\" successfully" Mar 17 17:40:49.135317 containerd[1595]: time="2025-03-17T17:40:49.132361768Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d1cb43dde5233e158ca5f81018f9de97cf33fb91019056b6ca49d61b708d25d9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:40:49.135317 containerd[1595]: time="2025-03-17T17:40:49.132444515Z" level=info msg="RemovePodSandbox \"d1cb43dde5233e158ca5f81018f9de97cf33fb91019056b6ca49d61b708d25d9\" returns successfully" Mar 17 17:40:49.135317 containerd[1595]: time="2025-03-17T17:40:49.133014050Z" level=info msg="StopPodSandbox for \"2b5d6a23facb015513f2c43f3c63102f6d43652416d6f1140530390b70489c22\"" Mar 17 17:40:49.135317 containerd[1595]: time="2025-03-17T17:40:49.133131374Z" level=info msg="TearDown network for sandbox \"2b5d6a23facb015513f2c43f3c63102f6d43652416d6f1140530390b70489c22\" successfully" Mar 17 17:40:49.135317 containerd[1595]: time="2025-03-17T17:40:49.133144699Z" level=info msg="StopPodSandbox for \"2b5d6a23facb015513f2c43f3c63102f6d43652416d6f1140530390b70489c22\" returns successfully" Mar 17 17:40:49.135317 containerd[1595]: time="2025-03-17T17:40:49.134483179Z" level=info msg="RemovePodSandbox for \"2b5d6a23facb015513f2c43f3c63102f6d43652416d6f1140530390b70489c22\"" Mar 17 17:40:49.135317 containerd[1595]: time="2025-03-17T17:40:49.134508247Z" level=info msg="Forcibly stopping sandbox \"2b5d6a23facb015513f2c43f3c63102f6d43652416d6f1140530390b70489c22\"" Mar 17 17:40:49.135317 containerd[1595]: time="2025-03-17T17:40:49.134624558Z" level=info msg="TearDown network for sandbox \"2b5d6a23facb015513f2c43f3c63102f6d43652416d6f1140530390b70489c22\" successfully" Mar 17 17:40:49.168032 containerd[1595]: time="2025-03-17T17:40:49.163592426Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2b5d6a23facb015513f2c43f3c63102f6d43652416d6f1140530390b70489c22\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 17 17:40:49.168032 containerd[1595]: time="2025-03-17T17:40:49.163694651Z" level=info msg="RemovePodSandbox \"2b5d6a23facb015513f2c43f3c63102f6d43652416d6f1140530390b70489c22\" returns successfully" Mar 17 17:40:49.168032 containerd[1595]: time="2025-03-17T17:40:49.164349599Z" level=info msg="StopPodSandbox for \"4040578a1116952f521b77502ce508aa0d53ea74f89ae69cc9c0b36990e1d298\"" Mar 17 17:40:49.168032 containerd[1595]: time="2025-03-17T17:40:49.164492961Z" level=info msg="TearDown network for sandbox \"4040578a1116952f521b77502ce508aa0d53ea74f89ae69cc9c0b36990e1d298\" successfully" Mar 17 17:40:49.168032 containerd[1595]: time="2025-03-17T17:40:49.164506146Z" level=info msg="StopPodSandbox for \"4040578a1116952f521b77502ce508aa0d53ea74f89ae69cc9c0b36990e1d298\" returns successfully" Mar 17 17:40:49.168032 containerd[1595]: time="2025-03-17T17:40:49.164840032Z" level=info msg="RemovePodSandbox for \"4040578a1116952f521b77502ce508aa0d53ea74f89ae69cc9c0b36990e1d298\"" Mar 17 17:40:49.168032 containerd[1595]: time="2025-03-17T17:40:49.164860251Z" level=info msg="Forcibly stopping sandbox \"4040578a1116952f521b77502ce508aa0d53ea74f89ae69cc9c0b36990e1d298\"" Mar 17 17:40:49.168032 containerd[1595]: time="2025-03-17T17:40:49.164934031Z" level=info msg="TearDown network for sandbox \"4040578a1116952f521b77502ce508aa0d53ea74f89ae69cc9c0b36990e1d298\" successfully" Mar 17 17:40:49.190751 containerd[1595]: time="2025-03-17T17:40:49.190560284Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4040578a1116952f521b77502ce508aa0d53ea74f89ae69cc9c0b36990e1d298\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:40:49.190751 containerd[1595]: time="2025-03-17T17:40:49.190631330Z" level=info msg="RemovePodSandbox \"4040578a1116952f521b77502ce508aa0d53ea74f89ae69cc9c0b36990e1d298\" returns successfully" Mar 17 17:40:49.197354 containerd[1595]: time="2025-03-17T17:40:49.196254871Z" level=info msg="StopPodSandbox for \"77d66960410609d1e8214051a05893a3a45d9fe74839d2d76216065131f8e2e7\"" Mar 17 17:40:49.197354 containerd[1595]: time="2025-03-17T17:40:49.196422811Z" level=info msg="TearDown network for sandbox \"77d66960410609d1e8214051a05893a3a45d9fe74839d2d76216065131f8e2e7\" successfully" Mar 17 17:40:49.197354 containerd[1595]: time="2025-03-17T17:40:49.196436378Z" level=info msg="StopPodSandbox for \"77d66960410609d1e8214051a05893a3a45d9fe74839d2d76216065131f8e2e7\" returns successfully" Mar 17 17:40:49.197354 containerd[1595]: time="2025-03-17T17:40:49.197006303Z" level=info msg="RemovePodSandbox for \"77d66960410609d1e8214051a05893a3a45d9fe74839d2d76216065131f8e2e7\"" Mar 17 17:40:49.197354 containerd[1595]: time="2025-03-17T17:40:49.197028696Z" level=info msg="Forcibly stopping sandbox \"77d66960410609d1e8214051a05893a3a45d9fe74839d2d76216065131f8e2e7\"" Mar 17 17:40:49.197354 containerd[1595]: time="2025-03-17T17:40:49.197113517Z" level=info msg="TearDown network for sandbox \"77d66960410609d1e8214051a05893a3a45d9fe74839d2d76216065131f8e2e7\" successfully" Mar 17 17:40:49.233756 containerd[1595]: time="2025-03-17T17:40:49.232922296Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"77d66960410609d1e8214051a05893a3a45d9fe74839d2d76216065131f8e2e7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 17 17:40:49.233756 containerd[1595]: time="2025-03-17T17:40:49.233010274Z" level=info msg="RemovePodSandbox \"77d66960410609d1e8214051a05893a3a45d9fe74839d2d76216065131f8e2e7\" returns successfully" Mar 17 17:40:49.243115 containerd[1595]: time="2025-03-17T17:40:49.240846861Z" level=info msg="StopPodSandbox for \"ac0efcfe96b6c12ef5ec0b7000810e0e2965d5634139771788530249931169bd\"" Mar 17 17:40:49.243115 containerd[1595]: time="2025-03-17T17:40:49.241009009Z" level=info msg="TearDown network for sandbox \"ac0efcfe96b6c12ef5ec0b7000810e0e2965d5634139771788530249931169bd\" successfully" Mar 17 17:40:49.243115 containerd[1595]: time="2025-03-17T17:40:49.241023186Z" level=info msg="StopPodSandbox for \"ac0efcfe96b6c12ef5ec0b7000810e0e2965d5634139771788530249931169bd\" returns successfully" Mar 17 17:40:49.252251 containerd[1595]: time="2025-03-17T17:40:49.249125540Z" level=info msg="RemovePodSandbox for \"ac0efcfe96b6c12ef5ec0b7000810e0e2965d5634139771788530249931169bd\"" Mar 17 17:40:49.252251 containerd[1595]: time="2025-03-17T17:40:49.249196545Z" level=info msg="Forcibly stopping sandbox \"ac0efcfe96b6c12ef5ec0b7000810e0e2965d5634139771788530249931169bd\"" Mar 17 17:40:49.252251 containerd[1595]: time="2025-03-17T17:40:49.249345950Z" level=info msg="TearDown network for sandbox \"ac0efcfe96b6c12ef5ec0b7000810e0e2965d5634139771788530249931169bd\" successfully" Mar 17 17:40:49.271416 containerd[1595]: time="2025-03-17T17:40:49.265715560Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ac0efcfe96b6c12ef5ec0b7000810e0e2965d5634139771788530249931169bd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:40:49.271580 containerd[1595]: time="2025-03-17T17:40:49.271540355Z" level=info msg="RemovePodSandbox \"ac0efcfe96b6c12ef5ec0b7000810e0e2965d5634139771788530249931169bd\" returns successfully" Mar 17 17:40:49.273150 containerd[1595]: time="2025-03-17T17:40:49.272144576Z" level=info msg="StopPodSandbox for \"34eb4f16d7996747e5b14e123a347d522aeb89d8a3955f6791e639343c24d3a9\"" Mar 17 17:40:49.273150 containerd[1595]: time="2025-03-17T17:40:49.272319459Z" level=info msg="TearDown network for sandbox \"34eb4f16d7996747e5b14e123a347d522aeb89d8a3955f6791e639343c24d3a9\" successfully" Mar 17 17:40:49.273150 containerd[1595]: time="2025-03-17T17:40:49.272376177Z" level=info msg="StopPodSandbox for \"34eb4f16d7996747e5b14e123a347d522aeb89d8a3955f6791e639343c24d3a9\" returns successfully" Mar 17 17:40:49.283287 containerd[1595]: time="2025-03-17T17:40:49.280456889Z" level=info msg="RemovePodSandbox for \"34eb4f16d7996747e5b14e123a347d522aeb89d8a3955f6791e639343c24d3a9\"" Mar 17 17:40:49.283287 containerd[1595]: time="2025-03-17T17:40:49.280500734Z" level=info msg="Forcibly stopping sandbox \"34eb4f16d7996747e5b14e123a347d522aeb89d8a3955f6791e639343c24d3a9\"" Mar 17 17:40:49.283287 containerd[1595]: time="2025-03-17T17:40:49.280607216Z" level=info msg="TearDown network for sandbox \"34eb4f16d7996747e5b14e123a347d522aeb89d8a3955f6791e639343c24d3a9\" successfully" Mar 17 17:40:49.304380 containerd[1595]: time="2025-03-17T17:40:49.302435535Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"34eb4f16d7996747e5b14e123a347d522aeb89d8a3955f6791e639343c24d3a9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 17 17:40:49.304380 containerd[1595]: time="2025-03-17T17:40:49.302573398Z" level=info msg="RemovePodSandbox \"34eb4f16d7996747e5b14e123a347d522aeb89d8a3955f6791e639343c24d3a9\" returns successfully" Mar 17 17:40:49.304380 containerd[1595]: time="2025-03-17T17:40:49.303972872Z" level=info msg="StopPodSandbox for \"f7ef8277baee4989ca736c39bd862f683db77596fa62f499175b7f5729ba6845\"" Mar 17 17:40:49.304380 containerd[1595]: time="2025-03-17T17:40:49.304120114Z" level=info msg="TearDown network for sandbox \"f7ef8277baee4989ca736c39bd862f683db77596fa62f499175b7f5729ba6845\" successfully" Mar 17 17:40:49.304380 containerd[1595]: time="2025-03-17T17:40:49.304132797Z" level=info msg="StopPodSandbox for \"f7ef8277baee4989ca736c39bd862f683db77596fa62f499175b7f5729ba6845\" returns successfully" Mar 17 17:40:49.306208 containerd[1595]: time="2025-03-17T17:40:49.306023199Z" level=info msg="RemovePodSandbox for \"f7ef8277baee4989ca736c39bd862f683db77596fa62f499175b7f5729ba6845\"" Mar 17 17:40:49.306208 containerd[1595]: time="2025-03-17T17:40:49.306062553Z" level=info msg="Forcibly stopping sandbox \"f7ef8277baee4989ca736c39bd862f683db77596fa62f499175b7f5729ba6845\"" Mar 17 17:40:49.306208 containerd[1595]: time="2025-03-17T17:40:49.306189455Z" level=info msg="TearDown network for sandbox \"f7ef8277baee4989ca736c39bd862f683db77596fa62f499175b7f5729ba6845\" successfully" Mar 17 17:40:49.315751 containerd[1595]: time="2025-03-17T17:40:49.315619738Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f7ef8277baee4989ca736c39bd862f683db77596fa62f499175b7f5729ba6845\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:40:49.335102 containerd[1595]: time="2025-03-17T17:40:49.335048455Z" level=info msg="RemovePodSandbox \"f7ef8277baee4989ca736c39bd862f683db77596fa62f499175b7f5729ba6845\" returns successfully" Mar 17 17:40:49.340282 containerd[1595]: time="2025-03-17T17:40:49.338679522Z" level=info msg="StopPodSandbox for \"26f56c40fb6c99dbd1eab86735c25ff740b8a6908e9f46bd4b34e8974fec6340\"" Mar 17 17:40:49.340282 containerd[1595]: time="2025-03-17T17:40:49.338827043Z" level=info msg="TearDown network for sandbox \"26f56c40fb6c99dbd1eab86735c25ff740b8a6908e9f46bd4b34e8974fec6340\" successfully" Mar 17 17:40:49.340282 containerd[1595]: time="2025-03-17T17:40:49.338842253Z" level=info msg="StopPodSandbox for \"26f56c40fb6c99dbd1eab86735c25ff740b8a6908e9f46bd4b34e8974fec6340\" returns successfully" Mar 17 17:40:49.341699 containerd[1595]: time="2025-03-17T17:40:49.341548416Z" level=info msg="RemovePodSandbox for \"26f56c40fb6c99dbd1eab86735c25ff740b8a6908e9f46bd4b34e8974fec6340\"" Mar 17 17:40:49.341699 containerd[1595]: time="2025-03-17T17:40:49.341573464Z" level=info msg="Forcibly stopping sandbox \"26f56c40fb6c99dbd1eab86735c25ff740b8a6908e9f46bd4b34e8974fec6340\"" Mar 17 17:40:49.341699 containerd[1595]: time="2025-03-17T17:40:49.341656754Z" level=info msg="TearDown network for sandbox \"26f56c40fb6c99dbd1eab86735c25ff740b8a6908e9f46bd4b34e8974fec6340\" successfully" Mar 17 17:40:49.357160 containerd[1595]: time="2025-03-17T17:40:49.354314685Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"26f56c40fb6c99dbd1eab86735c25ff740b8a6908e9f46bd4b34e8974fec6340\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 17 17:40:49.357160 containerd[1595]: time="2025-03-17T17:40:49.354388144Z" level=info msg="RemovePodSandbox \"26f56c40fb6c99dbd1eab86735c25ff740b8a6908e9f46bd4b34e8974fec6340\" returns successfully" Mar 17 17:40:49.357160 containerd[1595]: time="2025-03-17T17:40:49.354973889Z" level=info msg="StopPodSandbox for \"942d161c9439f98b71a173bf29a2194d85fbeec7a0013e2dec8f2d8671baa6bb\"" Mar 17 17:40:49.357160 containerd[1595]: time="2025-03-17T17:40:49.355133033Z" level=info msg="TearDown network for sandbox \"942d161c9439f98b71a173bf29a2194d85fbeec7a0013e2dec8f2d8671baa6bb\" successfully" Mar 17 17:40:49.357160 containerd[1595]: time="2025-03-17T17:40:49.355146508Z" level=info msg="StopPodSandbox for \"942d161c9439f98b71a173bf29a2194d85fbeec7a0013e2dec8f2d8671baa6bb\" returns successfully" Mar 17 17:40:49.357160 containerd[1595]: time="2025-03-17T17:40:49.355467330Z" level=info msg="RemovePodSandbox for \"942d161c9439f98b71a173bf29a2194d85fbeec7a0013e2dec8f2d8671baa6bb\"" Mar 17 17:40:49.357160 containerd[1595]: time="2025-03-17T17:40:49.355489091Z" level=info msg="Forcibly stopping sandbox \"942d161c9439f98b71a173bf29a2194d85fbeec7a0013e2dec8f2d8671baa6bb\"" Mar 17 17:40:49.357160 containerd[1595]: time="2025-03-17T17:40:49.355585145Z" level=info msg="TearDown network for sandbox \"942d161c9439f98b71a173bf29a2194d85fbeec7a0013e2dec8f2d8671baa6bb\" successfully" Mar 17 17:40:49.368704 containerd[1595]: time="2025-03-17T17:40:49.367288757Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"942d161c9439f98b71a173bf29a2194d85fbeec7a0013e2dec8f2d8671baa6bb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:40:49.368704 containerd[1595]: time="2025-03-17T17:40:49.367354573Z" level=info msg="RemovePodSandbox \"942d161c9439f98b71a173bf29a2194d85fbeec7a0013e2dec8f2d8671baa6bb\" returns successfully" Mar 17 17:40:49.368704 containerd[1595]: time="2025-03-17T17:40:49.367867700Z" level=info msg="StopPodSandbox for \"365256915cb838ef59c5441635355f0f8c542b492296dba2257adbd1208d3145\"" Mar 17 17:40:49.368704 containerd[1595]: time="2025-03-17T17:40:49.368012306Z" level=info msg="TearDown network for sandbox \"365256915cb838ef59c5441635355f0f8c542b492296dba2257adbd1208d3145\" successfully" Mar 17 17:40:49.368704 containerd[1595]: time="2025-03-17T17:40:49.368025390Z" level=info msg="StopPodSandbox for \"365256915cb838ef59c5441635355f0f8c542b492296dba2257adbd1208d3145\" returns successfully" Mar 17 17:40:49.368704 containerd[1595]: time="2025-03-17T17:40:49.368354848Z" level=info msg="RemovePodSandbox for \"365256915cb838ef59c5441635355f0f8c542b492296dba2257adbd1208d3145\"" Mar 17 17:40:49.368704 containerd[1595]: time="2025-03-17T17:40:49.368376298Z" level=info msg="Forcibly stopping sandbox \"365256915cb838ef59c5441635355f0f8c542b492296dba2257adbd1208d3145\"" Mar 17 17:40:49.368704 containerd[1595]: time="2025-03-17T17:40:49.368547034Z" level=info msg="TearDown network for sandbox \"365256915cb838ef59c5441635355f0f8c542b492296dba2257adbd1208d3145\" successfully" Mar 17 17:40:49.381905 containerd[1595]: time="2025-03-17T17:40:49.381739363Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"365256915cb838ef59c5441635355f0f8c542b492296dba2257adbd1208d3145\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 17 17:40:49.381905 containerd[1595]: time="2025-03-17T17:40:49.381810068Z" level=info msg="RemovePodSandbox \"365256915cb838ef59c5441635355f0f8c542b492296dba2257adbd1208d3145\" returns successfully" Mar 17 17:40:49.383403 containerd[1595]: time="2025-03-17T17:40:49.382481437Z" level=info msg="StopPodSandbox for \"20eeb52a08d0b508aef33de21dde17e0cd81210aa71fa73dbc2ac4461c919633\"" Mar 17 17:40:49.383403 containerd[1595]: time="2025-03-17T17:40:49.382587388Z" level=info msg="TearDown network for sandbox \"20eeb52a08d0b508aef33de21dde17e0cd81210aa71fa73dbc2ac4461c919633\" successfully" Mar 17 17:40:49.383403 containerd[1595]: time="2025-03-17T17:40:49.382599381Z" level=info msg="StopPodSandbox for \"20eeb52a08d0b508aef33de21dde17e0cd81210aa71fa73dbc2ac4461c919633\" returns successfully" Mar 17 17:40:49.383403 containerd[1595]: time="2025-03-17T17:40:49.382987450Z" level=info msg="RemovePodSandbox for \"20eeb52a08d0b508aef33de21dde17e0cd81210aa71fa73dbc2ac4461c919633\"" Mar 17 17:40:49.383403 containerd[1595]: time="2025-03-17T17:40:49.383010413Z" level=info msg="Forcibly stopping sandbox \"20eeb52a08d0b508aef33de21dde17e0cd81210aa71fa73dbc2ac4461c919633\"" Mar 17 17:40:49.383403 containerd[1595]: time="2025-03-17T17:40:49.383141643Z" level=info msg="TearDown network for sandbox \"20eeb52a08d0b508aef33de21dde17e0cd81210aa71fa73dbc2ac4461c919633\" successfully" Mar 17 17:40:49.389881 containerd[1595]: time="2025-03-17T17:40:49.389679037Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"20eeb52a08d0b508aef33de21dde17e0cd81210aa71fa73dbc2ac4461c919633\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:40:49.389881 containerd[1595]: time="2025-03-17T17:40:49.389769128Z" level=info msg="RemovePodSandbox \"20eeb52a08d0b508aef33de21dde17e0cd81210aa71fa73dbc2ac4461c919633\" returns successfully" Mar 17 17:40:49.407265 containerd[1595]: time="2025-03-17T17:40:49.405948987Z" level=info msg="StopPodSandbox for \"e8e267785d373d30ff890d65856ca59b4c0a3425918915226c4a5960a98dee38\"" Mar 17 17:40:49.407265 containerd[1595]: time="2025-03-17T17:40:49.406164839Z" level=info msg="TearDown network for sandbox \"e8e267785d373d30ff890d65856ca59b4c0a3425918915226c4a5960a98dee38\" successfully" Mar 17 17:40:49.407265 containerd[1595]: time="2025-03-17T17:40:49.406201779Z" level=info msg="StopPodSandbox for \"e8e267785d373d30ff890d65856ca59b4c0a3425918915226c4a5960a98dee38\" returns successfully" Mar 17 17:40:49.407265 containerd[1595]: time="2025-03-17T17:40:49.406713654Z" level=info msg="RemovePodSandbox for \"e8e267785d373d30ff890d65856ca59b4c0a3425918915226c4a5960a98dee38\"" Mar 17 17:40:49.407265 containerd[1595]: time="2025-03-17T17:40:49.406761505Z" level=info msg="Forcibly stopping sandbox \"e8e267785d373d30ff890d65856ca59b4c0a3425918915226c4a5960a98dee38\"" Mar 17 17:40:49.407265 containerd[1595]: time="2025-03-17T17:40:49.406874891Z" level=info msg="TearDown network for sandbox \"e8e267785d373d30ff890d65856ca59b4c0a3425918915226c4a5960a98dee38\" successfully" Mar 17 17:40:49.470831 containerd[1595]: time="2025-03-17T17:40:49.469492435Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e8e267785d373d30ff890d65856ca59b4c0a3425918915226c4a5960a98dee38\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 17 17:40:49.470831 containerd[1595]: time="2025-03-17T17:40:49.469580983Z" level=info msg="RemovePodSandbox \"e8e267785d373d30ff890d65856ca59b4c0a3425918915226c4a5960a98dee38\" returns successfully" Mar 17 17:40:49.476718 containerd[1595]: time="2025-03-17T17:40:49.475996043Z" level=info msg="StopPodSandbox for \"1adf26c22b4b5d1793570092cabedf8622ab2953053e532861e9cd3cabf9e781\"" Mar 17 17:40:49.476718 containerd[1595]: time="2025-03-17T17:40:49.476145397Z" level=info msg="TearDown network for sandbox \"1adf26c22b4b5d1793570092cabedf8622ab2953053e532861e9cd3cabf9e781\" successfully" Mar 17 17:40:49.476718 containerd[1595]: time="2025-03-17T17:40:49.476158082Z" level=info msg="StopPodSandbox for \"1adf26c22b4b5d1793570092cabedf8622ab2953053e532861e9cd3cabf9e781\" returns successfully" Mar 17 17:40:49.476718 containerd[1595]: time="2025-03-17T17:40:49.476611976Z" level=info msg="RemovePodSandbox for \"1adf26c22b4b5d1793570092cabedf8622ab2953053e532861e9cd3cabf9e781\"" Mar 17 17:40:49.476718 containerd[1595]: time="2025-03-17T17:40:49.476633247Z" level=info msg="Forcibly stopping sandbox \"1adf26c22b4b5d1793570092cabedf8622ab2953053e532861e9cd3cabf9e781\"" Mar 17 17:40:49.482034 containerd[1595]: time="2025-03-17T17:40:49.477137808Z" level=info msg="TearDown network for sandbox \"1adf26c22b4b5d1793570092cabedf8622ab2953053e532861e9cd3cabf9e781\" successfully" Mar 17 17:40:49.536960 containerd[1595]: time="2025-03-17T17:40:49.536844307Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1adf26c22b4b5d1793570092cabedf8622ab2953053e532861e9cd3cabf9e781\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:40:49.537146 containerd[1595]: time="2025-03-17T17:40:49.537035612Z" level=info msg="RemovePodSandbox \"1adf26c22b4b5d1793570092cabedf8622ab2953053e532861e9cd3cabf9e781\" returns successfully" Mar 17 17:40:49.562426 containerd[1595]: time="2025-03-17T17:40:49.558961606Z" level=info msg="StopPodSandbox for \"c9b0c4fa8919e11cff7d3923325c13c62aa514fce34d2bf42744703293fb404a\"" Mar 17 17:40:49.562426 containerd[1595]: time="2025-03-17T17:40:49.559108066Z" level=info msg="TearDown network for sandbox \"c9b0c4fa8919e11cff7d3923325c13c62aa514fce34d2bf42744703293fb404a\" successfully" Mar 17 17:40:49.562426 containerd[1595]: time="2025-03-17T17:40:49.559120058Z" level=info msg="StopPodSandbox for \"c9b0c4fa8919e11cff7d3923325c13c62aa514fce34d2bf42744703293fb404a\" returns successfully" Mar 17 17:40:49.565525 containerd[1595]: time="2025-03-17T17:40:49.563526872Z" level=info msg="RemovePodSandbox for \"c9b0c4fa8919e11cff7d3923325c13c62aa514fce34d2bf42744703293fb404a\"" Mar 17 17:40:49.565525 containerd[1595]: time="2025-03-17T17:40:49.563556979Z" level=info msg="Forcibly stopping sandbox \"c9b0c4fa8919e11cff7d3923325c13c62aa514fce34d2bf42744703293fb404a\"" Mar 17 17:40:49.565525 containerd[1595]: time="2025-03-17T17:40:49.563657190Z" level=info msg="TearDown network for sandbox \"c9b0c4fa8919e11cff7d3923325c13c62aa514fce34d2bf42744703293fb404a\" successfully" Mar 17 17:40:49.574130 containerd[1595]: time="2025-03-17T17:40:49.573432811Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c9b0c4fa8919e11cff7d3923325c13c62aa514fce34d2bf42744703293fb404a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 17 17:40:49.574130 containerd[1595]: time="2025-03-17T17:40:49.573513765Z" level=info msg="RemovePodSandbox \"c9b0c4fa8919e11cff7d3923325c13c62aa514fce34d2bf42744703293fb404a\" returns successfully" Mar 17 17:40:49.578257 containerd[1595]: time="2025-03-17T17:40:49.576365678Z" level=info msg="StopPodSandbox for \"56930828e6675ed53284ec08cf68dc2df899cbb21d4206af3c51cb94bd3641a3\"" Mar 17 17:40:49.578257 containerd[1595]: time="2025-03-17T17:40:49.576484003Z" level=info msg="TearDown network for sandbox \"56930828e6675ed53284ec08cf68dc2df899cbb21d4206af3c51cb94bd3641a3\" successfully" Mar 17 17:40:49.578257 containerd[1595]: time="2025-03-17T17:40:49.576496106Z" level=info msg="StopPodSandbox for \"56930828e6675ed53284ec08cf68dc2df899cbb21d4206af3c51cb94bd3641a3\" returns successfully" Mar 17 17:40:49.578257 containerd[1595]: time="2025-03-17T17:40:49.577024242Z" level=info msg="RemovePodSandbox for \"56930828e6675ed53284ec08cf68dc2df899cbb21d4206af3c51cb94bd3641a3\"" Mar 17 17:40:49.578257 containerd[1595]: time="2025-03-17T17:40:49.577050662Z" level=info msg="Forcibly stopping sandbox \"56930828e6675ed53284ec08cf68dc2df899cbb21d4206af3c51cb94bd3641a3\"" Mar 17 17:40:49.578257 containerd[1595]: time="2025-03-17T17:40:49.577137398Z" level=info msg="TearDown network for sandbox \"56930828e6675ed53284ec08cf68dc2df899cbb21d4206af3c51cb94bd3641a3\" successfully" Mar 17 17:40:49.596939 containerd[1595]: time="2025-03-17T17:40:49.593487982Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"56930828e6675ed53284ec08cf68dc2df899cbb21d4206af3c51cb94bd3641a3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:40:49.596939 containerd[1595]: time="2025-03-17T17:40:49.593568345Z" level=info msg="RemovePodSandbox \"56930828e6675ed53284ec08cf68dc2df899cbb21d4206af3c51cb94bd3641a3\" returns successfully" Mar 17 17:40:49.596939 containerd[1595]: time="2025-03-17T17:40:49.594050283Z" level=info msg="StopPodSandbox for \"6d5908974dd55c896be2ce2e8fc7161568647ff858d2e39ef99109ac7e6c038f\"" Mar 17 17:40:49.596939 containerd[1595]: time="2025-03-17T17:40:49.594181744Z" level=info msg="TearDown network for sandbox \"6d5908974dd55c896be2ce2e8fc7161568647ff858d2e39ef99109ac7e6c038f\" successfully" Mar 17 17:40:49.596939 containerd[1595]: time="2025-03-17T17:40:49.594195740Z" level=info msg="StopPodSandbox for \"6d5908974dd55c896be2ce2e8fc7161568647ff858d2e39ef99109ac7e6c038f\" returns successfully" Mar 17 17:40:49.596939 containerd[1595]: time="2025-03-17T17:40:49.594409607Z" level=info msg="RemovePodSandbox for \"6d5908974dd55c896be2ce2e8fc7161568647ff858d2e39ef99109ac7e6c038f\"" Mar 17 17:40:49.596939 containerd[1595]: time="2025-03-17T17:40:49.594430567Z" level=info msg="Forcibly stopping sandbox \"6d5908974dd55c896be2ce2e8fc7161568647ff858d2e39ef99109ac7e6c038f\"" Mar 17 17:40:49.596939 containerd[1595]: time="2025-03-17T17:40:49.594508275Z" level=info msg="TearDown network for sandbox \"6d5908974dd55c896be2ce2e8fc7161568647ff858d2e39ef99109ac7e6c038f\" successfully" Mar 17 17:40:49.626314 containerd[1595]: time="2025-03-17T17:40:49.623870625Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6d5908974dd55c896be2ce2e8fc7161568647ff858d2e39ef99109ac7e6c038f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 17 17:40:49.626314 containerd[1595]: time="2025-03-17T17:40:49.623962660Z" level=info msg="RemovePodSandbox \"6d5908974dd55c896be2ce2e8fc7161568647ff858d2e39ef99109ac7e6c038f\" returns successfully" Mar 17 17:40:49.626314 containerd[1595]: time="2025-03-17T17:40:49.624926196Z" level=info msg="StopPodSandbox for \"42b421cddd2705783b89c22afae3fb49cf60cb0687bb90d17c77fdd923f81a26\"" Mar 17 17:40:49.626314 containerd[1595]: time="2025-03-17T17:40:49.625086430Z" level=info msg="TearDown network for sandbox \"42b421cddd2705783b89c22afae3fb49cf60cb0687bb90d17c77fdd923f81a26\" successfully" Mar 17 17:40:49.626314 containerd[1595]: time="2025-03-17T17:40:49.625098484Z" level=info msg="StopPodSandbox for \"42b421cddd2705783b89c22afae3fb49cf60cb0687bb90d17c77fdd923f81a26\" returns successfully" Mar 17 17:40:49.626314 containerd[1595]: time="2025-03-17T17:40:49.625437650Z" level=info msg="RemovePodSandbox for \"42b421cddd2705783b89c22afae3fb49cf60cb0687bb90d17c77fdd923f81a26\"" Mar 17 17:40:49.626314 containerd[1595]: time="2025-03-17T17:40:49.625458159Z" level=info msg="Forcibly stopping sandbox \"42b421cddd2705783b89c22afae3fb49cf60cb0687bb90d17c77fdd923f81a26\"" Mar 17 17:40:49.626314 containerd[1595]: time="2025-03-17T17:40:49.625528904Z" level=info msg="TearDown network for sandbox \"42b421cddd2705783b89c22afae3fb49cf60cb0687bb90d17c77fdd923f81a26\" successfully" Mar 17 17:40:49.648940 containerd[1595]: time="2025-03-17T17:40:49.645554308Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"42b421cddd2705783b89c22afae3fb49cf60cb0687bb90d17c77fdd923f81a26\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:40:49.648940 containerd[1595]: time="2025-03-17T17:40:49.645630974Z" level=info msg="RemovePodSandbox \"42b421cddd2705783b89c22afae3fb49cf60cb0687bb90d17c77fdd923f81a26\" returns successfully" Mar 17 17:40:49.648940 containerd[1595]: time="2025-03-17T17:40:49.646160042Z" level=info msg="StopPodSandbox for \"4781c562b9eeba235aaf21f9b2b6e59972144759785c73a0b5142cf2ae289716\"" Mar 17 17:40:49.648940 containerd[1595]: time="2025-03-17T17:40:49.646311390Z" level=info msg="TearDown network for sandbox \"4781c562b9eeba235aaf21f9b2b6e59972144759785c73a0b5142cf2ae289716\" successfully" Mar 17 17:40:49.648940 containerd[1595]: time="2025-03-17T17:40:49.646358671Z" level=info msg="StopPodSandbox for \"4781c562b9eeba235aaf21f9b2b6e59972144759785c73a0b5142cf2ae289716\" returns successfully" Mar 17 17:40:49.648940 containerd[1595]: time="2025-03-17T17:40:49.646752992Z" level=info msg="RemovePodSandbox for \"4781c562b9eeba235aaf21f9b2b6e59972144759785c73a0b5142cf2ae289716\"" Mar 17 17:40:49.648940 containerd[1595]: time="2025-03-17T17:40:49.646826201Z" level=info msg="Forcibly stopping sandbox \"4781c562b9eeba235aaf21f9b2b6e59972144759785c73a0b5142cf2ae289716\"" Mar 17 17:40:49.648940 containerd[1595]: time="2025-03-17T17:40:49.646954546Z" level=info msg="TearDown network for sandbox \"4781c562b9eeba235aaf21f9b2b6e59972144759785c73a0b5142cf2ae289716\" successfully" Mar 17 17:40:49.671626 containerd[1595]: time="2025-03-17T17:40:49.669372427Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4781c562b9eeba235aaf21f9b2b6e59972144759785c73a0b5142cf2ae289716\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 17 17:40:49.671626 containerd[1595]: time="2025-03-17T17:40:49.669467348Z" level=info msg="RemovePodSandbox \"4781c562b9eeba235aaf21f9b2b6e59972144759785c73a0b5142cf2ae289716\" returns successfully" Mar 17 17:40:49.671626 containerd[1595]: time="2025-03-17T17:40:49.670149247Z" level=info msg="StopPodSandbox for \"3ae5e7e1a2edfff6d4b0c4d0359bb49c7476f5217a6d73d532f2042191956211\"" Mar 17 17:40:49.671626 containerd[1595]: time="2025-03-17T17:40:49.670359667Z" level=info msg="TearDown network for sandbox \"3ae5e7e1a2edfff6d4b0c4d0359bb49c7476f5217a6d73d532f2042191956211\" successfully" Mar 17 17:40:49.671626 containerd[1595]: time="2025-03-17T17:40:49.670372973Z" level=info msg="StopPodSandbox for \"3ae5e7e1a2edfff6d4b0c4d0359bb49c7476f5217a6d73d532f2042191956211\" returns successfully" Mar 17 17:40:49.671626 containerd[1595]: time="2025-03-17T17:40:49.670836707Z" level=info msg="RemovePodSandbox for \"3ae5e7e1a2edfff6d4b0c4d0359bb49c7476f5217a6d73d532f2042191956211\"" Mar 17 17:40:49.671626 containerd[1595]: time="2025-03-17T17:40:49.670864889Z" level=info msg="Forcibly stopping sandbox \"3ae5e7e1a2edfff6d4b0c4d0359bb49c7476f5217a6d73d532f2042191956211\"" Mar 17 17:40:49.671626 containerd[1595]: time="2025-03-17T17:40:49.670954230Z" level=info msg="TearDown network for sandbox \"3ae5e7e1a2edfff6d4b0c4d0359bb49c7476f5217a6d73d532f2042191956211\" successfully" Mar 17 17:40:49.691930 containerd[1595]: time="2025-03-17T17:40:49.691733981Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3ae5e7e1a2edfff6d4b0c4d0359bb49c7476f5217a6d73d532f2042191956211\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:40:49.691930 containerd[1595]: time="2025-03-17T17:40:49.691812000Z" level=info msg="RemovePodSandbox \"3ae5e7e1a2edfff6d4b0c4d0359bb49c7476f5217a6d73d532f2042191956211\" returns successfully" Mar 17 17:40:49.692834 containerd[1595]: time="2025-03-17T17:40:49.692664624Z" level=info msg="StopPodSandbox for \"ed6b24f70fa60d242a22655e9a449dc4947159be0ee404338a20364a878f692f\"" Mar 17 17:40:49.692834 containerd[1595]: time="2025-03-17T17:40:49.692776206Z" level=info msg="TearDown network for sandbox \"ed6b24f70fa60d242a22655e9a449dc4947159be0ee404338a20364a878f692f\" successfully" Mar 17 17:40:49.692834 containerd[1595]: time="2025-03-17T17:40:49.692787708Z" level=info msg="StopPodSandbox for \"ed6b24f70fa60d242a22655e9a449dc4947159be0ee404338a20364a878f692f\" returns successfully" Mar 17 17:40:49.697489 containerd[1595]: time="2025-03-17T17:40:49.693148326Z" level=info msg="RemovePodSandbox for \"ed6b24f70fa60d242a22655e9a449dc4947159be0ee404338a20364a878f692f\"" Mar 17 17:40:49.697489 containerd[1595]: time="2025-03-17T17:40:49.693178263Z" level=info msg="Forcibly stopping sandbox \"ed6b24f70fa60d242a22655e9a449dc4947159be0ee404338a20364a878f692f\"" Mar 17 17:40:49.697489 containerd[1595]: time="2025-03-17T17:40:49.693268664Z" level=info msg="TearDown network for sandbox \"ed6b24f70fa60d242a22655e9a449dc4947159be0ee404338a20364a878f692f\" successfully" Mar 17 17:40:49.715498 containerd[1595]: time="2025-03-17T17:40:49.707776468Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ed6b24f70fa60d242a22655e9a449dc4947159be0ee404338a20364a878f692f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 17 17:40:49.715498 containerd[1595]: time="2025-03-17T17:40:49.707876239Z" level=info msg="RemovePodSandbox \"ed6b24f70fa60d242a22655e9a449dc4947159be0ee404338a20364a878f692f\" returns successfully" Mar 17 17:40:49.715498 containerd[1595]: time="2025-03-17T17:40:49.710849733Z" level=info msg="StopPodSandbox for \"dd437b0769e397a19a60fedddb881b8b5945dd00b8468215af667396abe8c99b\"" Mar 17 17:40:49.715498 containerd[1595]: time="2025-03-17T17:40:49.711090671Z" level=info msg="TearDown network for sandbox \"dd437b0769e397a19a60fedddb881b8b5945dd00b8468215af667396abe8c99b\" successfully" Mar 17 17:40:49.715498 containerd[1595]: time="2025-03-17T17:40:49.711103897Z" level=info msg="StopPodSandbox for \"dd437b0769e397a19a60fedddb881b8b5945dd00b8468215af667396abe8c99b\" returns successfully" Mar 17 17:40:49.715498 containerd[1595]: time="2025-03-17T17:40:49.711585805Z" level=info msg="RemovePodSandbox for \"dd437b0769e397a19a60fedddb881b8b5945dd00b8468215af667396abe8c99b\"" Mar 17 17:40:49.715498 containerd[1595]: time="2025-03-17T17:40:49.711611143Z" level=info msg="Forcibly stopping sandbox \"dd437b0769e397a19a60fedddb881b8b5945dd00b8468215af667396abe8c99b\"" Mar 17 17:40:49.715498 containerd[1595]: time="2025-03-17T17:40:49.711684884Z" level=info msg="TearDown network for sandbox \"dd437b0769e397a19a60fedddb881b8b5945dd00b8468215af667396abe8c99b\" successfully" Mar 17 17:40:49.744005 containerd[1595]: time="2025-03-17T17:40:49.743742648Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dd437b0769e397a19a60fedddb881b8b5945dd00b8468215af667396abe8c99b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:40:49.744005 containerd[1595]: time="2025-03-17T17:40:49.743835384Z" level=info msg="RemovePodSandbox \"dd437b0769e397a19a60fedddb881b8b5945dd00b8468215af667396abe8c99b\" returns successfully" Mar 17 17:40:49.748236 containerd[1595]: time="2025-03-17T17:40:49.745565179Z" level=info msg="StopPodSandbox for \"f5c43aa1edb80e8f71c4f32c2f94d14b61bb5ad2e565373b70dd7c592880ce33\"" Mar 17 17:40:49.748236 containerd[1595]: time="2025-03-17T17:40:49.745729663Z" level=info msg="TearDown network for sandbox \"f5c43aa1edb80e8f71c4f32c2f94d14b61bb5ad2e565373b70dd7c592880ce33\" successfully" Mar 17 17:40:49.748236 containerd[1595]: time="2025-03-17T17:40:49.745743709Z" level=info msg="StopPodSandbox for \"f5c43aa1edb80e8f71c4f32c2f94d14b61bb5ad2e565373b70dd7c592880ce33\" returns successfully" Mar 17 17:40:49.748236 containerd[1595]: time="2025-03-17T17:40:49.746109846Z" level=info msg="RemovePodSandbox for \"f5c43aa1edb80e8f71c4f32c2f94d14b61bb5ad2e565373b70dd7c592880ce33\"" Mar 17 17:40:49.748236 containerd[1595]: time="2025-03-17T17:40:49.746131859Z" level=info msg="Forcibly stopping sandbox \"f5c43aa1edb80e8f71c4f32c2f94d14b61bb5ad2e565373b70dd7c592880ce33\"" Mar 17 17:40:49.748236 containerd[1595]: time="2025-03-17T17:40:49.746248520Z" level=info msg="TearDown network for sandbox \"f5c43aa1edb80e8f71c4f32c2f94d14b61bb5ad2e565373b70dd7c592880ce33\" successfully" Mar 17 17:40:49.776197 containerd[1595]: time="2025-03-17T17:40:49.774005963Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f5c43aa1edb80e8f71c4f32c2f94d14b61bb5ad2e565373b70dd7c592880ce33\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 17 17:40:49.776197 containerd[1595]: time="2025-03-17T17:40:49.774083631Z" level=info msg="RemovePodSandbox \"f5c43aa1edb80e8f71c4f32c2f94d14b61bb5ad2e565373b70dd7c592880ce33\" returns successfully" Mar 17 17:40:49.791280 containerd[1595]: time="2025-03-17T17:40:49.791234369Z" level=info msg="StopPodSandbox for \"b9d430bac5a5a7a8efd8a37750b38ef6c82b3d3bf17afcb8593cde6668d95c1c\"" Mar 17 17:40:49.792526 containerd[1595]: time="2025-03-17T17:40:49.792100610Z" level=info msg="TearDown network for sandbox \"b9d430bac5a5a7a8efd8a37750b38ef6c82b3d3bf17afcb8593cde6668d95c1c\" successfully" Mar 17 17:40:49.792526 containerd[1595]: time="2025-03-17T17:40:49.792119245Z" level=info msg="StopPodSandbox for \"b9d430bac5a5a7a8efd8a37750b38ef6c82b3d3bf17afcb8593cde6668d95c1c\" returns successfully" Mar 17 17:40:49.796664 containerd[1595]: time="2025-03-17T17:40:49.793964971Z" level=info msg="RemovePodSandbox for \"b9d430bac5a5a7a8efd8a37750b38ef6c82b3d3bf17afcb8593cde6668d95c1c\"" Mar 17 17:40:49.796664 containerd[1595]: time="2025-03-17T17:40:49.794011620Z" level=info msg="Forcibly stopping sandbox \"b9d430bac5a5a7a8efd8a37750b38ef6c82b3d3bf17afcb8593cde6668d95c1c\"" Mar 17 17:40:49.796664 containerd[1595]: time="2025-03-17T17:40:49.794111510Z" level=info msg="TearDown network for sandbox \"b9d430bac5a5a7a8efd8a37750b38ef6c82b3d3bf17afcb8593cde6668d95c1c\" successfully" Mar 17 17:40:49.846967 containerd[1595]: time="2025-03-17T17:40:49.843814644Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b9d430bac5a5a7a8efd8a37750b38ef6c82b3d3bf17afcb8593cde6668d95c1c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:40:49.846967 containerd[1595]: time="2025-03-17T17:40:49.843996801Z" level=info msg="RemovePodSandbox \"b9d430bac5a5a7a8efd8a37750b38ef6c82b3d3bf17afcb8593cde6668d95c1c\" returns successfully" Mar 17 17:40:49.846967 containerd[1595]: time="2025-03-17T17:40:49.844675785Z" level=info msg="StopPodSandbox for \"49134f7de4eb41c8fca39f464b46178e5f924d3d67df790cc4ff6aac22e62c20\"" Mar 17 17:40:49.846967 containerd[1595]: time="2025-03-17T17:40:49.844797687Z" level=info msg="TearDown network for sandbox \"49134f7de4eb41c8fca39f464b46178e5f924d3d67df790cc4ff6aac22e62c20\" successfully" Mar 17 17:40:49.846967 containerd[1595]: time="2025-03-17T17:40:49.844810351Z" level=info msg="StopPodSandbox for \"49134f7de4eb41c8fca39f464b46178e5f924d3d67df790cc4ff6aac22e62c20\" returns successfully" Mar 17 17:40:49.846967 containerd[1595]: time="2025-03-17T17:40:49.845019259Z" level=info msg="RemovePodSandbox for \"49134f7de4eb41c8fca39f464b46178e5f924d3d67df790cc4ff6aac22e62c20\"" Mar 17 17:40:49.846967 containerd[1595]: time="2025-03-17T17:40:49.845040278Z" level=info msg="Forcibly stopping sandbox \"49134f7de4eb41c8fca39f464b46178e5f924d3d67df790cc4ff6aac22e62c20\"" Mar 17 17:40:49.846967 containerd[1595]: time="2025-03-17T17:40:49.845121924Z" level=info msg="TearDown network for sandbox \"49134f7de4eb41c8fca39f464b46178e5f924d3d67df790cc4ff6aac22e62c20\" successfully" Mar 17 17:40:49.868734 containerd[1595]: time="2025-03-17T17:40:49.864781993Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"49134f7de4eb41c8fca39f464b46178e5f924d3d67df790cc4ff6aac22e62c20\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 17 17:40:49.868734 containerd[1595]: time="2025-03-17T17:40:49.864857086Z" level=info msg="RemovePodSandbox \"49134f7de4eb41c8fca39f464b46178e5f924d3d67df790cc4ff6aac22e62c20\" returns successfully" Mar 17 17:40:49.868734 containerd[1595]: time="2025-03-17T17:40:49.865473449Z" level=info msg="StopPodSandbox for \"8f5cb31cabf8e8d27bccef253b449fa86feb2ca1ec05565c3fcf2543ccde53fe\"" Mar 17 17:40:49.868734 containerd[1595]: time="2025-03-17T17:40:49.865599931Z" level=info msg="TearDown network for sandbox \"8f5cb31cabf8e8d27bccef253b449fa86feb2ca1ec05565c3fcf2543ccde53fe\" successfully" Mar 17 17:40:49.868734 containerd[1595]: time="2025-03-17T17:40:49.865614329Z" level=info msg="StopPodSandbox for \"8f5cb31cabf8e8d27bccef253b449fa86feb2ca1ec05565c3fcf2543ccde53fe\" returns successfully" Mar 17 17:40:49.868734 containerd[1595]: time="2025-03-17T17:40:49.865898259Z" level=info msg="RemovePodSandbox for \"8f5cb31cabf8e8d27bccef253b449fa86feb2ca1ec05565c3fcf2543ccde53fe\"" Mar 17 17:40:49.868734 containerd[1595]: time="2025-03-17T17:40:49.865917346Z" level=info msg="Forcibly stopping sandbox \"8f5cb31cabf8e8d27bccef253b449fa86feb2ca1ec05565c3fcf2543ccde53fe\"" Mar 17 17:40:49.868734 containerd[1595]: time="2025-03-17T17:40:49.865986067Z" level=info msg="TearDown network for sandbox \"8f5cb31cabf8e8d27bccef253b449fa86feb2ca1ec05565c3fcf2543ccde53fe\" successfully" Mar 17 17:40:49.934210 containerd[1595]: time="2025-03-17T17:40:49.934045787Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8f5cb31cabf8e8d27bccef253b449fa86feb2ca1ec05565c3fcf2543ccde53fe\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:40:49.934484 containerd[1595]: time="2025-03-17T17:40:49.934269243Z" level=info msg="RemovePodSandbox \"8f5cb31cabf8e8d27bccef253b449fa86feb2ca1ec05565c3fcf2543ccde53fe\" returns successfully" Mar 17 17:40:49.936291 containerd[1595]: time="2025-03-17T17:40:49.935666273Z" level=info msg="StopPodSandbox for \"2e1e7d1843a36206ce7f9b169d9c04eae943191c049cef7fc5546197a8f6354f\"" Mar 17 17:40:49.936291 containerd[1595]: time="2025-03-17T17:40:49.935796070Z" level=info msg="TearDown network for sandbox \"2e1e7d1843a36206ce7f9b169d9c04eae943191c049cef7fc5546197a8f6354f\" successfully" Mar 17 17:40:49.936291 containerd[1595]: time="2025-03-17T17:40:49.935809727Z" level=info msg="StopPodSandbox for \"2e1e7d1843a36206ce7f9b169d9c04eae943191c049cef7fc5546197a8f6354f\" returns successfully" Mar 17 17:40:49.937428 containerd[1595]: time="2025-03-17T17:40:49.937373405Z" level=info msg="RemovePodSandbox for \"2e1e7d1843a36206ce7f9b169d9c04eae943191c049cef7fc5546197a8f6354f\"" Mar 17 17:40:49.937428 containerd[1595]: time="2025-03-17T17:40:49.937411046Z" level=info msg="Forcibly stopping sandbox \"2e1e7d1843a36206ce7f9b169d9c04eae943191c049cef7fc5546197a8f6354f\"" Mar 17 17:40:49.937571 containerd[1595]: time="2025-03-17T17:40:49.937509504Z" level=info msg="TearDown network for sandbox \"2e1e7d1843a36206ce7f9b169d9c04eae943191c049cef7fc5546197a8f6354f\" successfully" Mar 17 17:40:50.012705 containerd[1595]: time="2025-03-17T17:40:50.010324055Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2e1e7d1843a36206ce7f9b169d9c04eae943191c049cef7fc5546197a8f6354f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 17 17:40:50.012705 containerd[1595]: time="2025-03-17T17:40:50.010435688Z" level=info msg="RemovePodSandbox \"2e1e7d1843a36206ce7f9b169d9c04eae943191c049cef7fc5546197a8f6354f\" returns successfully" Mar 17 17:40:50.015214 containerd[1595]: time="2025-03-17T17:40:50.015157138Z" level=info msg="StopPodSandbox for \"c208dc601ce7d74e812e5376fe213594190f45f56535f1466a66762c56ac3bb5\"" Mar 17 17:40:50.016408 containerd[1595]: time="2025-03-17T17:40:50.015341368Z" level=info msg="TearDown network for sandbox \"c208dc601ce7d74e812e5376fe213594190f45f56535f1466a66762c56ac3bb5\" successfully" Mar 17 17:40:50.016408 containerd[1595]: time="2025-03-17T17:40:50.015357580Z" level=info msg="StopPodSandbox for \"c208dc601ce7d74e812e5376fe213594190f45f56535f1466a66762c56ac3bb5\" returns successfully" Mar 17 17:40:50.016408 containerd[1595]: time="2025-03-17T17:40:50.015890384Z" level=info msg="RemovePodSandbox for \"c208dc601ce7d74e812e5376fe213594190f45f56535f1466a66762c56ac3bb5\"" Mar 17 17:40:50.016408 containerd[1595]: time="2025-03-17T17:40:50.015911284Z" level=info msg="Forcibly stopping sandbox \"c208dc601ce7d74e812e5376fe213594190f45f56535f1466a66762c56ac3bb5\"" Mar 17 17:40:50.016408 containerd[1595]: time="2025-03-17T17:40:50.015988210Z" level=info msg="TearDown network for sandbox \"c208dc601ce7d74e812e5376fe213594190f45f56535f1466a66762c56ac3bb5\" successfully" Mar 17 17:40:50.039259 kubelet[2894]: E0317 17:40:50.032281 2894 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:50.039259 kubelet[2894]: E0317 17:40:50.036549 2894 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:50.130638 containerd[1595]: time="2025-03-17T17:40:50.130359334Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c208dc601ce7d74e812e5376fe213594190f45f56535f1466a66762c56ac3bb5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 17 17:40:50.130638 containerd[1595]: time="2025-03-17T17:40:50.130454525Z" level=info msg="RemovePodSandbox \"c208dc601ce7d74e812e5376fe213594190f45f56535f1466a66762c56ac3bb5\" returns successfully" Mar 17 17:40:50.132055 containerd[1595]: time="2025-03-17T17:40:50.131619874Z" level=info msg="StopPodSandbox for \"e4b287cc3440b029a61eafe60d6154dd7ee593208c644b16f34a54fd4da37eac\"" Mar 17 17:40:50.132055 containerd[1595]: time="2025-03-17T17:40:50.131749451Z" level=info msg="TearDown network for sandbox \"e4b287cc3440b029a61eafe60d6154dd7ee593208c644b16f34a54fd4da37eac\" successfully" Mar 17 17:40:50.132055 containerd[1595]: time="2025-03-17T17:40:50.131762856Z" level=info msg="StopPodSandbox for \"e4b287cc3440b029a61eafe60d6154dd7ee593208c644b16f34a54fd4da37eac\" returns successfully" Mar 17 17:40:50.132981 containerd[1595]: time="2025-03-17T17:40:50.132739396Z" level=info msg="RemovePodSandbox for \"e4b287cc3440b029a61eafe60d6154dd7ee593208c644b16f34a54fd4da37eac\"" Mar 17 17:40:50.132981 containerd[1595]: time="2025-03-17T17:40:50.132772920Z" level=info msg="Forcibly stopping sandbox \"e4b287cc3440b029a61eafe60d6154dd7ee593208c644b16f34a54fd4da37eac\"" Mar 17 17:40:50.132981 containerd[1595]: time="2025-03-17T17:40:50.132873281Z" level=info msg="TearDown network for sandbox \"e4b287cc3440b029a61eafe60d6154dd7ee593208c644b16f34a54fd4da37eac\" successfully" Mar 17 17:40:51.737494 systemd[1]: Started sshd@14-10.0.0.27:22-10.0.0.1:46088.service - OpenSSH per-connection server daemon (10.0.0.1:46088). Mar 17 17:40:51.830006 sshd[5992]: Accepted publickey for core from 10.0.0.1 port 46088 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:40:51.831559 sshd-session[5992]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:40:51.835707 systemd-logind[1578]: New session 15 of user core. Mar 17 17:40:51.841614 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 17 17:40:52.012814 sshd[5997]: Connection closed by 10.0.0.1 port 46088 Mar 17 17:40:52.014370 sshd-session[5992]: pam_unix(sshd:session): session closed for user core Mar 17 17:40:52.017965 systemd[1]: sshd@14-10.0.0.27:22-10.0.0.1:46088.service: Deactivated successfully. Mar 17 17:40:52.020725 systemd-logind[1578]: Session 15 logged out. Waiting for processes to exit. Mar 17 17:40:52.020793 systemd[1]: session-15.scope: Deactivated successfully. Mar 17 17:40:52.022139 systemd-logind[1578]: Removed session 15. Mar 17 17:40:53.788702 containerd[1595]: time="2025-03-17T17:40:53.788427124Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e4b287cc3440b029a61eafe60d6154dd7ee593208c644b16f34a54fd4da37eac\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
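The two kubelet "Nameserver limits exceeded" errors above appear because the node's resolv.conf lists more nameservers than kubelet will pass through to pods, so it truncates the list to the first three (here 1.1.1.1, 1.0.0.1 and 8.8.8.8). A minimal sketch of the same check is below; the path /etc/resolv.conf and the limit of three are assumptions based on conventional resolver behaviour and on the three servers kept in the log, not on kubelet's source.

```go
// Hedged sketch: count nameserver entries in a resolv.conf and show which ones
// a three-nameserver limit would drop, mirroring the kubelet warning above.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // assumed limit; matches the three servers applied in the log

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("limit exceeded: keeping %v, dropping %v\n",
			servers[:maxNameservers], servers[maxNameservers:])
	} else {
		fmt.Printf("within limit: %v\n", servers)
	}
}
```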
Mar 17 17:40:53.788702 containerd[1595]: time="2025-03-17T17:40:53.788545579Z" level=info msg="RemovePodSandbox \"e4b287cc3440b029a61eafe60d6154dd7ee593208c644b16f34a54fd4da37eac\" returns successfully" Mar 17 17:40:53.789644 containerd[1595]: time="2025-03-17T17:40:53.789391307Z" level=info msg="StopPodSandbox for \"9ffaa7346b30a0a0fdab8bbda65eb3e04adb1e85c75bb216bbff56520558abc7\"" Mar 17 17:40:53.789644 containerd[1595]: time="2025-03-17T17:40:53.789614061Z" level=info msg="TearDown network for sandbox \"9ffaa7346b30a0a0fdab8bbda65eb3e04adb1e85c75bb216bbff56520558abc7\" successfully" Mar 17 17:40:53.789644 containerd[1595]: time="2025-03-17T17:40:53.789630162Z" level=info msg="StopPodSandbox for \"9ffaa7346b30a0a0fdab8bbda65eb3e04adb1e85c75bb216bbff56520558abc7\" returns successfully" Mar 17 17:40:53.790139 containerd[1595]: time="2025-03-17T17:40:53.790093743Z" level=info msg="RemovePodSandbox for \"9ffaa7346b30a0a0fdab8bbda65eb3e04adb1e85c75bb216bbff56520558abc7\"" Mar 17 17:40:53.790241 containerd[1595]: time="2025-03-17T17:40:53.790138920Z" level=info msg="Forcibly stopping sandbox \"9ffaa7346b30a0a0fdab8bbda65eb3e04adb1e85c75bb216bbff56520558abc7\"" Mar 17 17:40:53.790393 containerd[1595]: time="2025-03-17T17:40:53.790289527Z" level=info msg="TearDown network for sandbox \"9ffaa7346b30a0a0fdab8bbda65eb3e04adb1e85c75bb216bbff56520558abc7\" successfully" Mar 17 17:40:53.890384 containerd[1595]: time="2025-03-17T17:40:53.890336768Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9ffaa7346b30a0a0fdab8bbda65eb3e04adb1e85c75bb216bbff56520558abc7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 17 17:40:53.890557 containerd[1595]: time="2025-03-17T17:40:53.890407332Z" level=info msg="RemovePodSandbox \"9ffaa7346b30a0a0fdab8bbda65eb3e04adb1e85c75bb216bbff56520558abc7\" returns successfully" Mar 17 17:40:53.890796 containerd[1595]: time="2025-03-17T17:40:53.890763760Z" level=info msg="StopPodSandbox for \"a5886a93d22cec47bea8d8fe870e1a8aedd1311baa8a46f1f6d33a721fd6d1f0\"" Mar 17 17:40:53.890950 containerd[1595]: time="2025-03-17T17:40:53.890888007Z" level=info msg="TearDown network for sandbox \"a5886a93d22cec47bea8d8fe870e1a8aedd1311baa8a46f1f6d33a721fd6d1f0\" successfully" Mar 17 17:40:53.890950 containerd[1595]: time="2025-03-17T17:40:53.890899328Z" level=info msg="StopPodSandbox for \"a5886a93d22cec47bea8d8fe870e1a8aedd1311baa8a46f1f6d33a721fd6d1f0\" returns successfully" Mar 17 17:40:53.892949 containerd[1595]: time="2025-03-17T17:40:53.891316451Z" level=info msg="RemovePodSandbox for \"a5886a93d22cec47bea8d8fe870e1a8aedd1311baa8a46f1f6d33a721fd6d1f0\"" Mar 17 17:40:53.892949 containerd[1595]: time="2025-03-17T17:40:53.891354594Z" level=info msg="Forcibly stopping sandbox \"a5886a93d22cec47bea8d8fe870e1a8aedd1311baa8a46f1f6d33a721fd6d1f0\"" Mar 17 17:40:53.892949 containerd[1595]: time="2025-03-17T17:40:53.891448182Z" level=info msg="TearDown network for sandbox \"a5886a93d22cec47bea8d8fe870e1a8aedd1311baa8a46f1f6d33a721fd6d1f0\" successfully" Mar 17 17:40:53.919090 containerd[1595]: time="2025-03-17T17:40:53.919044884Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:40:53.950215 containerd[1595]: time="2025-03-17T17:40:53.950150431Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.2: active requests=0, bytes read=34792912" Mar 17 
17:40:54.012603 containerd[1595]: time="2025-03-17T17:40:54.012522973Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a5886a93d22cec47bea8d8fe870e1a8aedd1311baa8a46f1f6d33a721fd6d1f0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 17 17:40:54.016076 containerd[1595]: time="2025-03-17T17:40:54.012630147Z" level=info msg="RemovePodSandbox \"a5886a93d22cec47bea8d8fe870e1a8aedd1311baa8a46f1f6d33a721fd6d1f0\" returns successfully" Mar 17 17:40:54.016076 containerd[1595]: time="2025-03-17T17:40:54.013146810Z" level=info msg="StopPodSandbox for \"4270495bee62ae326cae8538cb4638b63b8a315b467917aefb8a5faa220863b6\"" Mar 17 17:40:54.016076 containerd[1595]: time="2025-03-17T17:40:54.013305812Z" level=info msg="TearDown network for sandbox \"4270495bee62ae326cae8538cb4638b63b8a315b467917aefb8a5faa220863b6\" successfully" Mar 17 17:40:54.016076 containerd[1595]: time="2025-03-17T17:40:54.013316802Z" level=info msg="StopPodSandbox for \"4270495bee62ae326cae8538cb4638b63b8a315b467917aefb8a5faa220863b6\" returns successfully" Mar 17 17:40:54.016076 containerd[1595]: time="2025-03-17T17:40:54.013714479Z" level=info msg="RemovePodSandbox for \"4270495bee62ae326cae8538cb4638b63b8a315b467917aefb8a5faa220863b6\"" Mar 17 17:40:54.016076 containerd[1595]: time="2025-03-17T17:40:54.013733485Z" level=info msg="Forcibly stopping sandbox \"4270495bee62ae326cae8538cb4638b63b8a315b467917aefb8a5faa220863b6\"" Mar 17 17:40:54.016076 containerd[1595]: time="2025-03-17T17:40:54.013808096Z" level=info msg="TearDown network for sandbox \"4270495bee62ae326cae8538cb4638b63b8a315b467917aefb8a5faa220863b6\" successfully" Mar 17 17:40:54.106161 containerd[1595]: time="2025-03-17T17:40:54.105996203Z" level=info msg="ImageCreate event name:\"sha256:f6a228558381bc7de7c5296ac6c4e903cfda929899c85806367a726ef6d7ff5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:40:54.202398 containerd[1595]: time="2025-03-17T17:40:54.202342425Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4270495bee62ae326cae8538cb4638b63b8a315b467917aefb8a5faa220863b6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:40:54.202398 containerd[1595]: time="2025-03-17T17:40:54.202412628Z" level=info msg="RemovePodSandbox \"4270495bee62ae326cae8538cb4638b63b8a315b467917aefb8a5faa220863b6\" returns successfully" Mar 17 17:40:54.203137 containerd[1595]: time="2025-03-17T17:40:54.202943879Z" level=info msg="StopPodSandbox for \"71a5b18ed93f00fba67ee73d5c0f6d663c5df57ae9c7398b2f1ad57743e4dfa4\"" Mar 17 17:40:54.203137 containerd[1595]: time="2025-03-17T17:40:54.203061172Z" level=info msg="TearDown network for sandbox \"71a5b18ed93f00fba67ee73d5c0f6d663c5df57ae9c7398b2f1ad57743e4dfa4\" successfully" Mar 17 17:40:54.203137 containerd[1595]: time="2025-03-17T17:40:54.203071541Z" level=info msg="StopPodSandbox for \"71a5b18ed93f00fba67ee73d5c0f6d663c5df57ae9c7398b2f1ad57743e4dfa4\" returns successfully" Mar 17 17:40:54.204246 containerd[1595]: time="2025-03-17T17:40:54.203342556Z" level=info msg="RemovePodSandbox for \"71a5b18ed93f00fba67ee73d5c0f6d663c5df57ae9c7398b2f1ad57743e4dfa4\"" Mar 17 17:40:54.204246 containerd[1595]: time="2025-03-17T17:40:54.203368145Z" level=info msg="Forcibly stopping sandbox \"71a5b18ed93f00fba67ee73d5c0f6d663c5df57ae9c7398b2f1ad57743e4dfa4\"" Mar 17 17:40:54.204246 containerd[1595]: time="2025-03-17T17:40:54.203437507Z" level=info msg="TearDown network for sandbox \"71a5b18ed93f00fba67ee73d5c0f6d663c5df57ae9c7398b2f1ad57743e4dfa4\" successfully" Mar 17 17:40:54.235094 containerd[1595]: time="2025-03-17T17:40:54.235041803Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:6d1f392b747f912366ec5c60ee1130952c2c07e8ce24c53480187daa0e3364aa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:40:54.235753 containerd[1595]: time="2025-03-17T17:40:54.235710134Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\" with image id \"sha256:f6a228558381bc7de7c5296ac6c4e903cfda929899c85806367a726ef6d7ff5f\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:6d1f392b747f912366ec5c60ee1130952c2c07e8ce24c53480187daa0e3364aa\", size \"36285984\" in 8.899223384s" Mar 17 17:40:54.235830 containerd[1595]: time="2025-03-17T17:40:54.235760028Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\" returns image reference \"sha256:f6a228558381bc7de7c5296ac6c4e903cfda929899c85806367a726ef6d7ff5f\"" Mar 17 17:40:54.236825 containerd[1595]: time="2025-03-17T17:40:54.236725685Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\"" Mar 17 17:40:54.244258 containerd[1595]: time="2025-03-17T17:40:54.244207000Z" level=info msg="CreateContainer within sandbox \"40d173274e176e401f77deeef130ee1defcb00c26c36117a114187651af3ba2c\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 17 17:40:54.358732 containerd[1595]: time="2025-03-17T17:40:54.358452187Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"71a5b18ed93f00fba67ee73d5c0f6d663c5df57ae9c7398b2f1ad57743e4dfa4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
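The "Pulled image ... in 8.899223384s" entry above reports the size and wall-clock duration of the calico/kube-controllers pull. A small sketch of the arithmetic is below, using only the numbers from that log line; treating the reported size as bytes is an assumption, so the resulting rate is only an estimate.

```go
// Hedged sketch: rough pull rate from the containerd log line above
// (size "36285984", duration "8.899223384s"). Pure arithmetic on logged values.
package main

import (
	"fmt"
	"time"
)

func main() {
	const sizeBytes = 36285984.0 // size reported for kube-controllers:v3.29.2
	d, err := time.ParseDuration("8.899223384s")
	if err != nil {
		panic(err)
	}
	rate := sizeBytes / d.Seconds() / (1 << 20) // MiB per second
	fmt.Printf("pulled %.1f MiB in %s (~%.2f MiB/s)\n", sizeBytes/(1<<20), d, rate)
}
```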
Mar 17 17:40:54.358732 containerd[1595]: time="2025-03-17T17:40:54.358538901Z" level=info msg="RemovePodSandbox \"71a5b18ed93f00fba67ee73d5c0f6d663c5df57ae9c7398b2f1ad57743e4dfa4\" returns successfully" Mar 17 17:40:54.359109 containerd[1595]: time="2025-03-17T17:40:54.359064471Z" level=info msg="StopPodSandbox for \"0d22527a818dc6181b158241dfda9203d1147c881adfa3d8b54bbdff5474367f\"" Mar 17 17:40:54.359286 containerd[1595]: time="2025-03-17T17:40:54.359262779Z" level=info msg="TearDown network for sandbox \"0d22527a818dc6181b158241dfda9203d1147c881adfa3d8b54bbdff5474367f\" successfully" Mar 17 17:40:54.359327 containerd[1595]: time="2025-03-17T17:40:54.359283748Z" level=info msg="StopPodSandbox for \"0d22527a818dc6181b158241dfda9203d1147c881adfa3d8b54bbdff5474367f\" returns successfully" Mar 17 17:40:54.359728 containerd[1595]: time="2025-03-17T17:40:54.359704959Z" level=info msg="RemovePodSandbox for \"0d22527a818dc6181b158241dfda9203d1147c881adfa3d8b54bbdff5474367f\"" Mar 17 17:40:54.359771 containerd[1595]: time="2025-03-17T17:40:54.359736509Z" level=info msg="Forcibly stopping sandbox \"0d22527a818dc6181b158241dfda9203d1147c881adfa3d8b54bbdff5474367f\"" Mar 17 17:40:54.359867 containerd[1595]: time="2025-03-17T17:40:54.359824146Z" level=info msg="TearDown network for sandbox \"0d22527a818dc6181b158241dfda9203d1147c881adfa3d8b54bbdff5474367f\" successfully" Mar 17 17:40:54.483866 containerd[1595]: time="2025-03-17T17:40:54.483771211Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0d22527a818dc6181b158241dfda9203d1147c881adfa3d8b54bbdff5474367f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 17 17:40:54.483866 containerd[1595]: time="2025-03-17T17:40:54.483867404Z" level=info msg="RemovePodSandbox \"0d22527a818dc6181b158241dfda9203d1147c881adfa3d8b54bbdff5474367f\" returns successfully" Mar 17 17:40:54.865682 containerd[1595]: time="2025-03-17T17:40:54.865618163Z" level=info msg="CreateContainer within sandbox \"40d173274e176e401f77deeef130ee1defcb00c26c36117a114187651af3ba2c\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"0bb9b6bc7d06e7a95b2b4af4180216f07769c9f59cbd7a520f71a873a398c18d\"" Mar 17 17:40:54.866362 containerd[1595]: time="2025-03-17T17:40:54.866327290Z" level=info msg="StartContainer for \"0bb9b6bc7d06e7a95b2b4af4180216f07769c9f59cbd7a520f71a873a398c18d\"" Mar 17 17:40:55.674629 containerd[1595]: time="2025-03-17T17:40:55.674575599Z" level=info msg="StartContainer for \"0bb9b6bc7d06e7a95b2b4af4180216f07769c9f59cbd7a520f71a873a398c18d\" returns successfully" Mar 17 17:40:56.447873 kubelet[2894]: I0317 17:40:56.447805 2894 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5b6b58f89d-g52xg" podStartSLOduration=39.547192872 podStartE2EDuration="48.447785603s" podCreationTimestamp="2025-03-17 17:40:08 +0000 UTC" firstStartedPulling="2025-03-17 17:40:45.335965946 +0000 UTC m=+56.630629805" lastFinishedPulling="2025-03-17 17:40:54.236558677 +0000 UTC m=+65.531222536" observedRunningTime="2025-03-17 17:40:56.160813084 +0000 UTC m=+67.455476943" watchObservedRunningTime="2025-03-17 17:40:56.447785603 +0000 UTC m=+67.742449452" Mar 17 17:40:57.026661 systemd[1]: Started sshd@15-10.0.0.27:22-10.0.0.1:44588.service - OpenSSH per-connection server daemon (10.0.0.1:44588). 
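The pod_startup_latency_tracker entry above for calico-kube-controllers can be reconstructed from its own timestamps: the E2E duration is observedRunningTime minus podCreationTimestamp, and the SLO duration comes out exactly equal to that E2E duration minus the image-pull window (lastFinishedPulling minus firstStartedPulling). The sketch below checks this; the timestamps are copied from the log (the monotonic " m=+…" suffixes are dropped for parsing), and the SLO interpretation is an inference that the arithmetic happens to confirm.

```go
// Hedged sketch: reproduce podStartE2EDuration and podStartSLOduration from the
// timestamps in the kubelet line above.
package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST" // Go's default time.Time format

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-03-17 17:40:08 +0000 UTC")
	running := mustParse("2025-03-17 17:40:56.447785603 +0000 UTC")
	pullStart := mustParse("2025-03-17 17:40:45.335965946 +0000 UTC")
	pullEnd := mustParse("2025-03-17 17:40:54.236558677 +0000 UTC")

	e2e := running.Sub(created)         // expected: 48.447785603s
	slo := e2e - pullEnd.Sub(pullStart) // expected: 39.547192872s
	fmt.Println("E2E:", e2e, "SLO:", slo)
}
```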
Mar 17 17:40:57.068073 sshd[6076]: Accepted publickey for core from 10.0.0.1 port 44588 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o
Mar 17 17:40:57.070097 sshd-session[6076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:40:57.074888 systemd-logind[1578]: New session 16 of user core.
Mar 17 17:40:57.079801 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 17 17:40:57.219186 sshd[6079]: Connection closed by 10.0.0.1 port 44588
Mar 17 17:40:57.219548 sshd-session[6076]: pam_unix(sshd:session): session closed for user core
Mar 17 17:40:57.223287 systemd[1]: sshd@15-10.0.0.27:22-10.0.0.1:44588.service: Deactivated successfully.
Mar 17 17:40:57.225717 systemd-logind[1578]: Session 16 logged out. Waiting for processes to exit.
Mar 17 17:40:57.225798 systemd[1]: session-16.scope: Deactivated successfully.
Mar 17 17:40:57.226802 systemd-logind[1578]: Removed session 16.
Mar 17 17:40:59.610413 containerd[1595]: time="2025-03-17T17:40:59.610344136Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:40:59.615486 containerd[1595]: time="2025-03-17T17:40:59.615369968Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.2: active requests=0, bytes read=42993204"
Mar 17 17:40:59.622710 containerd[1595]: time="2025-03-17T17:40:59.622637197Z" level=info msg="ImageCreate event name:\"sha256:d27fc480d1ad33921c40abef2ab6828fadf6524674fdcc622f571a5abc34ad55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:40:59.626396 containerd[1595]: time="2025-03-17T17:40:59.626332904Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:3623f5b60fad0da3387a8649371b53171a4b1226f4d989d2acad9145dc0ef56f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:40:59.627803 containerd[1595]: time="2025-03-17T17:40:59.627411982Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" with image id \"sha256:d27fc480d1ad33921c40abef2ab6828fadf6524674fdcc622f571a5abc34ad55\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:3623f5b60fad0da3387a8649371b53171a4b1226f4d989d2acad9145dc0ef56f\", size \"44486324\" in 5.390659387s"
Mar 17 17:40:59.627803 containerd[1595]: time="2025-03-17T17:40:59.627447790Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" returns image reference \"sha256:d27fc480d1ad33921c40abef2ab6828fadf6524674fdcc622f571a5abc34ad55\""
Mar 17 17:40:59.630122 containerd[1595]: time="2025-03-17T17:40:59.630075760Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\""
Mar 17 17:40:59.631300 containerd[1595]: time="2025-03-17T17:40:59.631263375Z" level=info msg="CreateContainer within sandbox \"597d5b6568018ea07c313e3802edef1421c3974c973861e0db8f2a0811d321c1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Mar 17 17:40:59.651795 containerd[1595]: time="2025-03-17T17:40:59.651737999Z" level=info msg="CreateContainer within sandbox \"597d5b6568018ea07c313e3802edef1421c3974c973861e0db8f2a0811d321c1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0bbdc6bf257e4a1793b40765238dfea3282dde9c0a92c0a23b1c06f7e8305d69\""
Mar 17 17:40:59.653316 containerd[1595]: time="2025-03-17T17:40:59.652760872Z" level=info msg="StartContainer for \"0bbdc6bf257e4a1793b40765238dfea3282dde9c0a92c0a23b1c06f7e8305d69\""
Mar 17 17:40:59.739524 containerd[1595]: time="2025-03-17T17:40:59.739468031Z" level=info msg="StartContainer for \"0bbdc6bf257e4a1793b40765238dfea3282dde9c0a92c0a23b1c06f7e8305d69\" returns successfully"
Mar 17 17:41:00.438991 containerd[1595]: time="2025-03-17T17:41:00.438917417Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:41:00.544857 containerd[1595]: time="2025-03-17T17:41:00.544733574Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.2: active requests=0, bytes read=77"
Mar 17 17:41:00.548990 containerd[1595]: time="2025-03-17T17:41:00.548928816Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" with image id \"sha256:d27fc480d1ad33921c40abef2ab6828fadf6524674fdcc622f571a5abc34ad55\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:3623f5b60fad0da3387a8649371b53171a4b1226f4d989d2acad9145dc0ef56f\", size \"44486324\" in 918.797602ms"
Mar 17 17:41:00.548990 containerd[1595]: time="2025-03-17T17:41:00.548974774Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" returns image reference \"sha256:d27fc480d1ad33921c40abef2ab6828fadf6524674fdcc622f571a5abc34ad55\""
Mar 17 17:41:00.550433 containerd[1595]: time="2025-03-17T17:41:00.550085292Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\""
Mar 17 17:41:00.552717 containerd[1595]: time="2025-03-17T17:41:00.552191279Z" level=info msg="CreateContainer within sandbox \"c7b42621afadef7c4f12c749cabdac1cb227f6b87236af2198d59d09290955f8\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Mar 17 17:41:00.717526 containerd[1595]: time="2025-03-17T17:41:00.717373633Z" level=info msg="CreateContainer within sandbox \"c7b42621afadef7c4f12c749cabdac1cb227f6b87236af2198d59d09290955f8\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"47fcfe9e277d725646f63ae9fc9b7336bf4763e88b1f0fe046b0b4e48c674eab\""
Mar 17 17:41:00.718481 containerd[1595]: time="2025-03-17T17:41:00.718447982Z" level=info msg="StartContainer for \"47fcfe9e277d725646f63ae9fc9b7336bf4763e88b1f0fe046b0b4e48c674eab\""
Mar 17 17:41:00.749972 kubelet[2894]: I0317 17:41:00.749892 2894 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-779d48f5d9-9lw4k" podStartSLOduration=38.489962539 podStartE2EDuration="52.749872531s" podCreationTimestamp="2025-03-17 17:40:08 +0000 UTC" firstStartedPulling="2025-03-17 17:40:45.369548877 +0000 UTC m=+56.664212736" lastFinishedPulling="2025-03-17 17:40:59.629458869 +0000 UTC m=+70.924122728" observedRunningTime="2025-03-17 17:41:00.749464868 +0000 UTC m=+72.044128737" watchObservedRunningTime="2025-03-17 17:41:00.749872531 +0000 UTC m=+72.044536390"
Mar 17 17:41:00.892248 containerd[1595]: time="2025-03-17T17:41:00.892171719Z" level=info msg="StartContainer for \"47fcfe9e277d725646f63ae9fc9b7336bf4763e88b1f0fe046b0b4e48c674eab\" returns successfully"
Mar 17 17:41:01.700387 kubelet[2894]: I0317 17:41:01.700348 2894 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 17 17:41:01.787621 kubelet[2894]: E0317 17:41:01.787560 2894 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:41:01.918781 kubelet[2894]: I0317 17:41:01.918674 2894 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-779d48f5d9-dsbpp" podStartSLOduration=39.39428963 podStartE2EDuration="53.918646615s" podCreationTimestamp="2025-03-17 17:40:08 +0000 UTC" firstStartedPulling="2025-03-17 17:40:46.025449147 +0000 UTC m=+57.320113006" lastFinishedPulling="2025-03-17 17:41:00.549806132 +0000 UTC m=+71.844469991" observedRunningTime="2025-03-17 17:41:01.917931538 +0000 UTC m=+73.212595408" watchObservedRunningTime="2025-03-17 17:41:01.918646615 +0000 UTC m=+73.213310514"
Mar 17 17:41:02.234525 systemd[1]: Started sshd@16-10.0.0.27:22-10.0.0.1:44600.service - OpenSSH per-connection server daemon (10.0.0.1:44600).
Mar 17 17:41:02.342086 sshd[6207]: Accepted publickey for core from 10.0.0.1 port 44600 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o
Mar 17 17:41:02.344154 sshd-session[6207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:41:02.349900 systemd-logind[1578]: New session 17 of user core.
Mar 17 17:41:02.354635 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 17 17:41:02.506284 sshd[6232]: Connection closed by 10.0.0.1 port 44600
Mar 17 17:41:02.506568 sshd-session[6207]: pam_unix(sshd:session): session closed for user core
Mar 17 17:41:02.511010 systemd[1]: sshd@16-10.0.0.27:22-10.0.0.1:44600.service: Deactivated successfully.
Mar 17 17:41:02.513625 systemd-logind[1578]: Session 17 logged out. Waiting for processes to exit.
Mar 17 17:41:02.513798 systemd[1]: session-17.scope: Deactivated successfully.
Mar 17 17:41:02.515037 systemd-logind[1578]: Removed session 17.
Mar 17 17:41:02.702461 kubelet[2894]: I0317 17:41:02.702419 2894 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 17 17:41:03.213746 containerd[1595]: time="2025-03-17T17:41:03.213678248Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:41:03.227883 containerd[1595]: time="2025-03-17T17:41:03.227827345Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.2: active requests=0, bytes read=7909887"
Mar 17 17:41:03.242207 containerd[1595]: time="2025-03-17T17:41:03.242147005Z" level=info msg="ImageCreate event name:\"sha256:0fae09f861e350c042fe0db9ce9f8cc5ac4df975a5c4e4a9ddc3c6fac1552a9a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:41:03.288716 containerd[1595]: time="2025-03-17T17:41:03.288632806Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:41:03.289341 containerd[1595]: time="2025-03-17T17:41:03.289298038Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.2\" with image id \"sha256:0fae09f861e350c042fe0db9ce9f8cc5ac4df975a5c4e4a9ddc3c6fac1552a9a\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\", size \"9402991\" in 2.73916221s"
Mar 17 17:41:03.289419 containerd[1595]: time="2025-03-17T17:41:03.289339116Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\" returns image reference \"sha256:0fae09f861e350c042fe0db9ce9f8cc5ac4df975a5c4e4a9ddc3c6fac1552a9a\""
Mar 17 17:41:03.291698 containerd[1595]: time="2025-03-17T17:41:03.291661741Z" level=info msg="CreateContainer within sandbox \"ac2b8e72c4ac2662e128644045c83f8c431b4e048312935dd957b6cbb5ea209a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Mar 17 17:41:03.546869 containerd[1595]: time="2025-03-17T17:41:03.546746627Z" level=info msg="CreateContainer within sandbox \"ac2b8e72c4ac2662e128644045c83f8c431b4e048312935dd957b6cbb5ea209a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"1cb76df7d1ff46ec7e600823f9dd944841b54b0d2c289ee5eb5ac3e3f3e5df9c\""
Mar 17 17:41:03.547606 containerd[1595]: time="2025-03-17T17:41:03.547411108Z" level=info msg="StartContainer for \"1cb76df7d1ff46ec7e600823f9dd944841b54b0d2c289ee5eb5ac3e3f3e5df9c\""
Mar 17 17:41:03.636625 containerd[1595]: time="2025-03-17T17:41:03.636574356Z" level=info msg="StartContainer for \"1cb76df7d1ff46ec7e600823f9dd944841b54b0d2c289ee5eb5ac3e3f3e5df9c\" returns successfully"
Mar 17 17:41:03.639153 containerd[1595]: time="2025-03-17T17:41:03.638868297Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\""
Mar 17 17:41:04.704751 kubelet[2894]: I0317 17:41:04.704691 2894 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 17 17:41:04.789253 kubelet[2894]: E0317 17:41:04.788110 2894 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:41:04.789253 kubelet[2894]: E0317 17:41:04.789068 2894 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:41:07.413456 containerd[1595]: time="2025-03-17T17:41:07.413387081Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:41:07.418630 containerd[1595]: time="2025-03-17T17:41:07.418548832Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2: active requests=0, bytes read=13986843"
Mar 17 17:41:07.424485 containerd[1595]: time="2025-03-17T17:41:07.424430458Z" level=info msg="ImageCreate event name:\"sha256:09a5a6ea58a48ac826468e05538c78d1378e103737124f1744efea8699fc29a8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:41:07.441347 containerd[1595]: time="2025-03-17T17:41:07.441280159Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:41:07.442247 containerd[1595]: time="2025-03-17T17:41:07.442182268Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" with image id \"sha256:09a5a6ea58a48ac826468e05538c78d1378e103737124f1744efea8699fc29a8\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\", size \"15479899\" in 3.803257143s"
Mar 17 17:41:07.442390 containerd[1595]: time="2025-03-17T17:41:07.442247742Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" returns image reference \"sha256:09a5a6ea58a48ac826468e05538c78d1378e103737124f1744efea8699fc29a8\""
Mar 17 17:41:07.444844 containerd[1595]: time="2025-03-17T17:41:07.444810989Z" level=info msg="CreateContainer within sandbox \"ac2b8e72c4ac2662e128644045c83f8c431b4e048312935dd957b6cbb5ea209a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Mar 17 17:41:07.480636 containerd[1595]: time="2025-03-17T17:41:07.480550406Z" level=info msg="CreateContainer within sandbox \"ac2b8e72c4ac2662e128644045c83f8c431b4e048312935dd957b6cbb5ea209a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"997d65538c6ae761e4d9b7e298943db4177017d52f9249310155a12e5071943c\""
Mar 17 17:41:07.481390 containerd[1595]: time="2025-03-17T17:41:07.481354199Z" level=info msg="StartContainer for \"997d65538c6ae761e4d9b7e298943db4177017d52f9249310155a12e5071943c\""
Mar 17 17:41:07.520659 systemd[1]: Started sshd@17-10.0.0.27:22-10.0.0.1:55386.service - OpenSSH per-connection server daemon (10.0.0.1:55386).
Mar 17 17:41:07.590638 sshd[6317]: Accepted publickey for core from 10.0.0.1 port 55386 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o
Mar 17 17:41:07.636412 sshd-session[6317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:41:07.640817 systemd-logind[1578]: New session 18 of user core.
Mar 17 17:41:07.647535 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 17 17:41:07.653632 containerd[1595]: time="2025-03-17T17:41:07.653585458Z" level=info msg="StartContainer for \"997d65538c6ae761e4d9b7e298943db4177017d52f9249310155a12e5071943c\" returns successfully"
Mar 17 17:41:07.742516 kubelet[2894]: I0317 17:41:07.742443 2894 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-24zxx" podStartSLOduration=39.100343188 podStartE2EDuration="59.742423516s" podCreationTimestamp="2025-03-17 17:40:08 +0000 UTC" firstStartedPulling="2025-03-17 17:40:46.800986607 +0000 UTC m=+58.095650466" lastFinishedPulling="2025-03-17 17:41:07.443066925 +0000 UTC m=+78.737730794" observedRunningTime="2025-03-17 17:41:07.741991287 +0000 UTC m=+79.036655166" watchObservedRunningTime="2025-03-17 17:41:07.742423516 +0000 UTC m=+79.037087385"
Mar 17 17:41:07.781294 sshd[6338]: Connection closed by 10.0.0.1 port 55386
Mar 17 17:41:07.781633 sshd-session[6317]: pam_unix(sshd:session): session closed for user core
Mar 17 17:41:07.786418 systemd[1]: sshd@17-10.0.0.27:22-10.0.0.1:55386.service: Deactivated successfully.
Mar 17 17:41:07.789583 systemd[1]: session-18.scope: Deactivated successfully.
Mar 17 17:41:07.790703 systemd-logind[1578]: Session 18 logged out. Waiting for processes to exit.
Mar 17 17:41:07.791829 systemd-logind[1578]: Removed session 18.
Mar 17 17:41:07.963914 kubelet[2894]: I0317 17:41:07.963860 2894 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Mar 17 17:41:07.963914 kubelet[2894]: I0317 17:41:07.963893 2894 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Mar 17 17:41:12.791516 systemd[1]: Started sshd@18-10.0.0.27:22-10.0.0.1:55388.service - OpenSSH per-connection server daemon (10.0.0.1:55388).
Mar 17 17:41:12.834311 sshd[6351]: Accepted publickey for core from 10.0.0.1 port 55388 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o
Mar 17 17:41:12.836313 sshd-session[6351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:41:12.841151 systemd-logind[1578]: New session 19 of user core.
Mar 17 17:41:12.846731 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 17 17:41:12.976537 sshd[6354]: Connection closed by 10.0.0.1 port 55388
Mar 17 17:41:12.977074 sshd-session[6351]: pam_unix(sshd:session): session closed for user core
Mar 17 17:41:12.991493 systemd[1]: Started sshd@19-10.0.0.27:22-10.0.0.1:55394.service - OpenSSH per-connection server daemon (10.0.0.1:55394).
Mar 17 17:41:12.992031 systemd[1]: sshd@18-10.0.0.27:22-10.0.0.1:55388.service: Deactivated successfully.
Mar 17 17:41:12.996460 systemd-logind[1578]: Session 19 logged out. Waiting for processes to exit.
Mar 17 17:41:12.997561 systemd[1]: session-19.scope: Deactivated successfully.
Mar 17 17:41:12.998807 systemd-logind[1578]: Removed session 19.
Mar 17 17:41:13.031142 sshd[6363]: Accepted publickey for core from 10.0.0.1 port 55394 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o
Mar 17 17:41:13.032991 sshd-session[6363]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:41:13.038840 systemd-logind[1578]: New session 20 of user core.
Mar 17 17:41:13.051522 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 17 17:41:14.201820 sshd[6369]: Connection closed by 10.0.0.1 port 55394
Mar 17 17:41:14.202319 sshd-session[6363]: pam_unix(sshd:session): session closed for user core
Mar 17 17:41:14.208696 systemd[1]: Started sshd@20-10.0.0.27:22-10.0.0.1:55408.service - OpenSSH per-connection server daemon (10.0.0.1:55408).
Mar 17 17:41:14.209373 systemd[1]: sshd@19-10.0.0.27:22-10.0.0.1:55394.service: Deactivated successfully.
Mar 17 17:41:14.214439 systemd-logind[1578]: Session 20 logged out. Waiting for processes to exit.
Mar 17 17:41:14.216308 systemd[1]: session-20.scope: Deactivated successfully.
Mar 17 17:41:14.217476 systemd-logind[1578]: Removed session 20.
Mar 17 17:41:14.246661 sshd[6376]: Accepted publickey for core from 10.0.0.1 port 55408 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o
Mar 17 17:41:14.248018 sshd-session[6376]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:41:14.252071 systemd-logind[1578]: New session 21 of user core.
Mar 17 17:41:14.257529 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 17 17:41:17.509674 sshd[6382]: Connection closed by 10.0.0.1 port 55408
Mar 17 17:41:17.510321 sshd-session[6376]: pam_unix(sshd:session): session closed for user core
Mar 17 17:41:17.519483 systemd[1]: Started sshd@21-10.0.0.27:22-10.0.0.1:50686.service - OpenSSH per-connection server daemon (10.0.0.1:50686).
Mar 17 17:41:17.519956 systemd[1]: sshd@20-10.0.0.27:22-10.0.0.1:55408.service: Deactivated successfully.
Mar 17 17:41:17.524485 systemd-logind[1578]: Session 21 logged out. Waiting for processes to exit.
Mar 17 17:41:17.524501 systemd[1]: session-21.scope: Deactivated successfully.
Mar 17 17:41:17.526034 systemd-logind[1578]: Removed session 21.
Mar 17 17:41:17.562238 sshd[6415]: Accepted publickey for core from 10.0.0.1 port 50686 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o
Mar 17 17:41:17.563952 sshd-session[6415]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:41:17.568399 systemd-logind[1578]: New session 22 of user core.
Mar 17 17:41:17.578558 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 17 17:41:17.862785 sshd[6421]: Connection closed by 10.0.0.1 port 50686
Mar 17 17:41:17.863396 sshd-session[6415]: pam_unix(sshd:session): session closed for user core
Mar 17 17:41:17.876742 systemd[1]: Started sshd@22-10.0.0.27:22-10.0.0.1:50702.service - OpenSSH per-connection server daemon (10.0.0.1:50702).
Mar 17 17:41:17.877484 systemd[1]: sshd@21-10.0.0.27:22-10.0.0.1:50686.service: Deactivated successfully.
Mar 17 17:41:17.882163 systemd[1]: session-22.scope: Deactivated successfully.
Mar 17 17:41:17.884963 systemd-logind[1578]: Session 22 logged out. Waiting for processes to exit.
Mar 17 17:41:17.886669 systemd-logind[1578]: Removed session 22.
Mar 17 17:41:17.916562 sshd[6429]: Accepted publickey for core from 10.0.0.1 port 50702 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o
Mar 17 17:41:17.918412 sshd-session[6429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:41:17.924162 systemd-logind[1578]: New session 23 of user core.
Mar 17 17:41:17.929763 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 17 17:41:18.050121 sshd[6434]: Connection closed by 10.0.0.1 port 50702
Mar 17 17:41:18.050612 sshd-session[6429]: pam_unix(sshd:session): session closed for user core
Mar 17 17:41:18.055173 systemd[1]: sshd@22-10.0.0.27:22-10.0.0.1:50702.service: Deactivated successfully.
Mar 17 17:41:18.057714 systemd-logind[1578]: Session 23 logged out. Waiting for processes to exit.
Mar 17 17:41:18.058091 systemd[1]: session-23.scope: Deactivated successfully.
Mar 17 17:41:18.059407 systemd-logind[1578]: Removed session 23.
Mar 17 17:41:23.062511 systemd[1]: Started sshd@23-10.0.0.27:22-10.0.0.1:50704.service - OpenSSH per-connection server daemon (10.0.0.1:50704).
Mar 17 17:41:23.094107 sshd[6446]: Accepted publickey for core from 10.0.0.1 port 50704 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o
Mar 17 17:41:23.095841 sshd-session[6446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:41:23.099999 systemd-logind[1578]: New session 24 of user core.
Mar 17 17:41:23.117561 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 17 17:41:23.223372 sshd[6449]: Connection closed by 10.0.0.1 port 50704
Mar 17 17:41:23.223719 sshd-session[6446]: pam_unix(sshd:session): session closed for user core
Mar 17 17:41:23.227450 systemd[1]: sshd@23-10.0.0.27:22-10.0.0.1:50704.service: Deactivated successfully.
Mar 17 17:41:23.230012 systemd-logind[1578]: Session 24 logged out. Waiting for processes to exit.
Mar 17 17:41:23.230103 systemd[1]: session-24.scope: Deactivated successfully.
Mar 17 17:41:23.231158 systemd-logind[1578]: Removed session 24.
Mar 17 17:41:28.243541 systemd[1]: Started sshd@24-10.0.0.27:22-10.0.0.1:34764.service - OpenSSH per-connection server daemon (10.0.0.1:34764).
Mar 17 17:41:28.275569 sshd[6473]: Accepted publickey for core from 10.0.0.1 port 34764 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o
Mar 17 17:41:28.277247 sshd-session[6473]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:41:28.281412 systemd-logind[1578]: New session 25 of user core.
Mar 17 17:41:28.289623 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 17 17:41:28.394859 sshd[6476]: Connection closed by 10.0.0.1 port 34764
Mar 17 17:41:28.395201 sshd-session[6473]: pam_unix(sshd:session): session closed for user core
Mar 17 17:41:28.399077 systemd[1]: sshd@24-10.0.0.27:22-10.0.0.1:34764.service: Deactivated successfully.
Mar 17 17:41:28.401457 systemd-logind[1578]: Session 25 logged out. Waiting for processes to exit.
Mar 17 17:41:28.401555 systemd[1]: session-25.scope: Deactivated successfully.
Mar 17 17:41:28.402715 systemd-logind[1578]: Removed session 25.
Mar 17 17:41:29.215094 kubelet[2894]: I0317 17:41:29.215040 2894 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 17 17:41:30.787868 kubelet[2894]: E0317 17:41:30.787812 2894 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:41:30.788786 kubelet[2894]: E0317 17:41:30.788662 2894 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:41:32.313835 kubelet[2894]: E0317 17:41:32.313759 2894 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:41:33.405491 systemd[1]: Started sshd@25-10.0.0.27:22-10.0.0.1:34780.service - OpenSSH per-connection server daemon (10.0.0.1:34780).
Mar 17 17:41:33.450549 sshd[6534]: Accepted publickey for core from 10.0.0.1 port 34780 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o
Mar 17 17:41:33.452886 sshd-session[6534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:41:33.457188 systemd-logind[1578]: New session 26 of user core.
Mar 17 17:41:33.474688 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 17 17:41:33.601414 sshd[6537]: Connection closed by 10.0.0.1 port 34780
Mar 17 17:41:33.601864 sshd-session[6534]: pam_unix(sshd:session): session closed for user core
Mar 17 17:41:33.605857 systemd[1]: sshd@25-10.0.0.27:22-10.0.0.1:34780.service: Deactivated successfully.
Mar 17 17:41:33.609171 systemd[1]: session-26.scope: Deactivated successfully.
Mar 17 17:41:33.609181 systemd-logind[1578]: Session 26 logged out. Waiting for processes to exit.
Mar 17 17:41:33.610487 systemd-logind[1578]: Removed session 26.
Mar 17 17:41:38.616764 systemd[1]: Started sshd@26-10.0.0.27:22-10.0.0.1:34916.service - OpenSSH per-connection server daemon (10.0.0.1:34916).
Mar 17 17:41:38.665440 sshd[6550]: Accepted publickey for core from 10.0.0.1 port 34916 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o
Mar 17 17:41:38.667894 sshd-session[6550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:41:38.674638 systemd-logind[1578]: New session 27 of user core.
Mar 17 17:41:38.679630 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 17 17:41:38.810398 sshd[6553]: Connection closed by 10.0.0.1 port 34916
Mar 17 17:41:38.810823 sshd-session[6550]: pam_unix(sshd:session): session closed for user core
Mar 17 17:41:38.815531 systemd[1]: sshd@26-10.0.0.27:22-10.0.0.1:34916.service: Deactivated successfully.
Mar 17 17:41:38.818746 systemd-logind[1578]: Session 27 logged out. Waiting for processes to exit.
Mar 17 17:41:38.818934 systemd[1]: session-27.scope: Deactivated successfully.
Mar 17 17:41:38.820292 systemd-logind[1578]: Removed session 27.