Mar 17 17:49:34.943604 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Mar 17 16:07:40 -00 2025
Mar 17 17:49:34.943632 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d4b838cd9a6f58e8c4a6b615c32b0b28ee0df1660e34033a8fbd0429c6de5fd0
Mar 17 17:49:34.943645 kernel: BIOS-provided physical RAM map:
Mar 17 17:49:34.943653 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 17 17:49:34.943660 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 17 17:49:34.943667 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 17 17:49:34.943676 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Mar 17 17:49:34.943683 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Mar 17 17:49:34.943691 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 17 17:49:34.943701 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 17 17:49:34.943709 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 17 17:49:34.943724 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 17 17:49:34.943731 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 17 17:49:34.943739 kernel: NX (Execute Disable) protection: active
Mar 17 17:49:34.943748 kernel: APIC: Static calls initialized
Mar 17 17:49:34.943762 kernel: SMBIOS 2.8 present.
Mar 17 17:49:34.943770 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Mar 17 17:49:34.943778 kernel: Hypervisor detected: KVM
Mar 17 17:49:34.943785 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 17 17:49:34.943793 kernel: kvm-clock: using sched offset of 3398956513 cycles
Mar 17 17:49:34.943801 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 17 17:49:34.943810 kernel: tsc: Detected 2794.750 MHz processor
Mar 17 17:49:34.943819 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 17 17:49:34.943828 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 17 17:49:34.943836 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Mar 17 17:49:34.943846 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 17 17:49:34.943854 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 17 17:49:34.943862 kernel: Using GB pages for direct mapping
Mar 17 17:49:34.943878 kernel: ACPI: Early table checksum verification disabled
Mar 17 17:49:34.943888 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Mar 17 17:49:34.943902 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:49:34.943912 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:49:34.943921 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:49:34.943934 kernel: ACPI: FACS 0x000000009CFE0000 000040
Mar 17 17:49:34.943943 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:49:34.943952 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:49:34.943960 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:49:34.943969 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:49:34.943977 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Mar 17 17:49:34.943986 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Mar 17 17:49:34.944001 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Mar 17 17:49:34.944032 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Mar 17 17:49:34.944047 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Mar 17 17:49:34.944055 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Mar 17 17:49:34.944064 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Mar 17 17:49:34.944072 kernel: No NUMA configuration found
Mar 17 17:49:34.944081 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Mar 17 17:49:34.944089 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Mar 17 17:49:34.944103 kernel: Zone ranges:
Mar 17 17:49:34.944111 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 17 17:49:34.944119 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Mar 17 17:49:34.944127 kernel: Normal empty
Mar 17 17:49:34.944136 kernel: Movable zone start for each node
Mar 17 17:49:34.944144 kernel: Early memory node ranges
Mar 17 17:49:34.944152 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 17 17:49:34.944160 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Mar 17 17:49:34.944168 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Mar 17 17:49:34.944180 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 17 17:49:34.944191 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 17 17:49:34.944199 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Mar 17 17:49:34.944208 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 17 17:49:34.944217 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 17 17:49:34.944226 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 17 17:49:34.944235 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 17 17:49:34.944244 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 17 17:49:34.944253 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 17 17:49:34.944264 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 17 17:49:34.944273 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 17 17:49:34.944282 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 17 17:49:34.944290 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 17 17:49:34.944299 kernel: TSC deadline timer available
Mar 17 17:49:34.944307 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Mar 17 17:49:34.944316 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 17 17:49:34.944325 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 17 17:49:34.944336 kernel: kvm-guest: setup PV sched yield
Mar 17 17:49:34.944344 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 17 17:49:34.944356 kernel: Booting paravirtualized kernel on KVM
Mar 17 17:49:34.944365 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 17 17:49:34.944374 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 17 17:49:34.944383 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Mar 17 17:49:34.944392 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Mar 17 17:49:34.944400 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 17 17:49:34.944409 kernel: kvm-guest: PV spinlocks enabled
Mar 17 17:49:34.944417 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 17 17:49:34.944426 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d4b838cd9a6f58e8c4a6b615c32b0b28ee0df1660e34033a8fbd0429c6de5fd0
Mar 17 17:49:34.944438 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 17 17:49:34.944446 kernel: random: crng init done
Mar 17 17:49:34.944454 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 17 17:49:34.944463 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 17 17:49:34.944471 kernel: Fallback order for Node 0: 0
Mar 17 17:49:34.944480 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Mar 17 17:49:34.944489 kernel: Policy zone: DMA32
Mar 17 17:49:34.944498 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 17 17:49:34.944510 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2303K rwdata, 22744K rodata, 42992K init, 2196K bss, 136900K reserved, 0K cma-reserved)
Mar 17 17:49:34.944519 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 17 17:49:34.944528 kernel: ftrace: allocating 37938 entries in 149 pages
Mar 17 17:49:34.944536 kernel: ftrace: allocated 149 pages with 4 groups
Mar 17 17:49:34.944545 kernel: Dynamic Preempt: voluntary
Mar 17 17:49:34.944554 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 17 17:49:34.944564 kernel: rcu: RCU event tracing is enabled.
Mar 17 17:49:34.944573 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 17 17:49:34.944582 kernel: Trampoline variant of Tasks RCU enabled.
Mar 17 17:49:34.944594 kernel: Rude variant of Tasks RCU enabled.
Mar 17 17:49:34.944603 kernel: Tracing variant of Tasks RCU enabled.
Mar 17 17:49:34.944614 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 17 17:49:34.944623 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 17 17:49:34.944632 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 17 17:49:34.944640 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 17 17:49:34.944649 kernel: Console: colour VGA+ 80x25
Mar 17 17:49:34.944658 kernel: printk: console [ttyS0] enabled
Mar 17 17:49:34.944669 kernel: ACPI: Core revision 20230628
Mar 17 17:49:34.944681 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 17 17:49:34.944692 kernel: APIC: Switch to symmetric I/O mode setup
Mar 17 17:49:34.944702 kernel: x2apic enabled
Mar 17 17:49:34.944713 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 17 17:49:34.944723 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 17 17:49:34.944732 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 17 17:49:34.944741 kernel: kvm-guest: setup PV IPIs
Mar 17 17:49:34.944761 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 17 17:49:34.944770 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 17 17:49:34.944780 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Mar 17 17:49:34.944789 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 17 17:49:34.944798 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 17 17:49:34.944810 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 17 17:49:34.944819 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 17 17:49:34.944828 kernel: Spectre V2 : Mitigation: Retpolines
Mar 17 17:49:34.944838 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Mar 17 17:49:34.944847 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Mar 17 17:49:34.944859 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Mar 17 17:49:34.944880 kernel: RETBleed: Mitigation: untrained return thunk
Mar 17 17:49:34.944890 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar 17 17:49:34.944900 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Mar 17 17:49:34.944909 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 17 17:49:34.944919 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 17 17:49:34.944929 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 17 17:49:34.944938 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 17 17:49:34.944951 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 17 17:49:34.944960 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 17 17:49:34.944969 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 17 17:49:34.944979 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 17 17:49:34.944988 kernel: Freeing SMP alternatives memory: 32K
Mar 17 17:49:34.944998 kernel: pid_max: default: 32768 minimum: 301
Mar 17 17:49:34.945007 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 17 17:49:34.945029 kernel: landlock: Up and running.
Mar 17 17:49:34.945038 kernel: SELinux: Initializing.
Mar 17 17:49:34.945050 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 17:49:34.945060 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 17:49:34.945069 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Mar 17 17:49:34.945078 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 17 17:49:34.945088 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 17 17:49:34.945097 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 17 17:49:34.945109 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Mar 17 17:49:34.945118 kernel: ... version: 0
Mar 17 17:49:34.945130 kernel: ... bit width: 48
Mar 17 17:49:34.945140 kernel: ... generic registers: 6
Mar 17 17:49:34.945149 kernel: ... value mask: 0000ffffffffffff
Mar 17 17:49:34.945158 kernel: ... max period: 00007fffffffffff
Mar 17 17:49:34.945168 kernel: ... fixed-purpose events: 0
Mar 17 17:49:34.945177 kernel: ... event mask: 000000000000003f
Mar 17 17:49:34.945186 kernel: signal: max sigframe size: 1776
Mar 17 17:49:34.945209 kernel: rcu: Hierarchical SRCU implementation.
Mar 17 17:49:34.945220 kernel: rcu: Max phase no-delay instances is 400.
Mar 17 17:49:34.945229 kernel: smp: Bringing up secondary CPUs ...
Mar 17 17:49:34.945250 kernel: smpboot: x86: Booting SMP configuration:
Mar 17 17:49:34.945268 kernel: .... node #0, CPUs: #1 #2 #3
Mar 17 17:49:34.945293 kernel: smp: Brought up 1 node, 4 CPUs
Mar 17 17:49:34.945303 kernel: smpboot: Max logical packages: 1
Mar 17 17:49:34.945327 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Mar 17 17:49:34.945337 kernel: devtmpfs: initialized
Mar 17 17:49:34.945360 kernel: x86/mm: Memory block size: 128MB
Mar 17 17:49:34.945370 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 17 17:49:34.945394 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 17 17:49:34.945413 kernel: pinctrl core: initialized pinctrl subsystem
Mar 17 17:49:34.945431 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 17 17:49:34.945440 kernel: audit: initializing netlink subsys (disabled)
Mar 17 17:49:34.945449 kernel: audit: type=2000 audit(1742233774.025:1): state=initialized audit_enabled=0 res=1
Mar 17 17:49:34.945458 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 17 17:49:34.945467 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 17 17:49:34.945476 kernel: cpuidle: using governor menu
Mar 17 17:49:34.945485 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 17 17:49:34.945495 kernel: dca service started, version 1.12.1
Mar 17 17:49:34.945508 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 17 17:49:34.945517 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 17 17:49:34.945527 kernel: PCI: Using configuration type 1 for base access
Mar 17 17:49:34.945536 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 17 17:49:34.945545 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 17 17:49:34.945555 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 17 17:49:34.945564 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 17 17:49:34.945573 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 17 17:49:34.945582 kernel: ACPI: Added _OSI(Module Device)
Mar 17 17:49:34.945593 kernel: ACPI: Added _OSI(Processor Device)
Mar 17 17:49:34.945602 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 17 17:49:34.945611 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 17 17:49:34.945621 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 17 17:49:34.945630 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 17 17:49:34.945640 kernel: ACPI: Interpreter enabled
Mar 17 17:49:34.945649 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 17 17:49:34.945659 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 17 17:49:34.945669 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 17 17:49:34.945682 kernel: PCI: Using E820 reservations for host bridge windows
Mar 17 17:49:34.945691 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 17 17:49:34.945700 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 17 17:49:34.945948 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 17 17:49:34.946115 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 17 17:49:34.946256 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 17 17:49:34.946268 kernel: PCI host bridge to bus 0000:00
Mar 17 17:49:34.946418 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 17 17:49:34.946549 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 17 17:49:34.946683 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 17 17:49:34.946928 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Mar 17 17:49:34.947112 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 17 17:49:34.947301 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Mar 17 17:49:34.947436 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 17 17:49:34.947607 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 17 17:49:34.947764 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Mar 17 17:49:34.947921 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Mar 17 17:49:34.948091 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Mar 17 17:49:34.948231 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Mar 17 17:49:34.948372 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 17 17:49:34.948530 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Mar 17 17:49:34.948723 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Mar 17 17:49:34.948919 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Mar 17 17:49:34.949086 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Mar 17 17:49:34.949237 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Mar 17 17:49:34.949385 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Mar 17 17:49:34.949530 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Mar 17 17:49:34.949706 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Mar 17 17:49:34.949928 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 17 17:49:34.950113 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Mar 17 17:49:34.950255 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Mar 17 17:49:34.950396 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Mar 17 17:49:34.950533 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Mar 17 17:49:34.950688 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 17 17:49:34.950844 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 17 17:49:34.951033 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 17 17:49:34.951189 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Mar 17 17:49:34.951338 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Mar 17 17:49:34.951506 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 17 17:49:34.951667 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Mar 17 17:49:34.951683 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 17 17:49:34.951699 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 17 17:49:34.951710 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 17 17:49:34.951720 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 17 17:49:34.951731 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 17 17:49:34.951741 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 17 17:49:34.951751 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 17 17:49:34.951761 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 17 17:49:34.951770 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 17 17:49:34.951780 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 17 17:49:34.951793 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 17 17:49:34.951803 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 17 17:49:34.951814 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 17 17:49:34.951824 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 17 17:49:34.951833 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 17 17:49:34.951840 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 17 17:49:34.951848 kernel: iommu: Default domain type: Translated
Mar 17 17:49:34.951856 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 17 17:49:34.951863 kernel: PCI: Using ACPI for IRQ routing
Mar 17 17:49:34.951884 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 17 17:49:34.951893 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 17 17:49:34.951900 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Mar 17 17:49:34.952047 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 17 17:49:34.952181 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 17 17:49:34.952331 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 17 17:49:34.952345 kernel: vgaarb: loaded
Mar 17 17:49:34.952356 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 17 17:49:34.952371 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 17 17:49:34.952382 kernel: clocksource: Switched to clocksource kvm-clock
Mar 17 17:49:34.952392 kernel: VFS: Disk quotas dquot_6.6.0
Mar 17 17:49:34.952403 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 17 17:49:34.952414 kernel: pnp: PnP ACPI init
Mar 17 17:49:34.952559 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 17 17:49:34.952571 kernel: pnp: PnP ACPI: found 6 devices
Mar 17 17:49:34.952580 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 17 17:49:34.952592 kernel: NET: Registered PF_INET protocol family
Mar 17 17:49:34.952600 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 17 17:49:34.952608 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 17 17:49:34.952616 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 17 17:49:34.952625 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 17 17:49:34.952632 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 17 17:49:34.952640 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 17 17:49:34.952648 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 17:49:34.952656 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 17:49:34.952667 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 17 17:49:34.952675 kernel: NET: Registered PF_XDP protocol family
Mar 17 17:49:34.952793 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 17 17:49:34.952915 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 17 17:49:34.953169 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 17 17:49:34.953293 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Mar 17 17:49:34.953402 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 17 17:49:34.953509 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Mar 17 17:49:34.953523 kernel: PCI: CLS 0 bytes, default 64
Mar 17 17:49:34.953531 kernel: Initialise system trusted keyrings
Mar 17 17:49:34.953539 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 17 17:49:34.953547 kernel: Key type asymmetric registered
Mar 17 17:49:34.953555 kernel: Asymmetric key parser 'x509' registered
Mar 17 17:49:34.953563 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 17 17:49:34.953571 kernel: io scheduler mq-deadline registered
Mar 17 17:49:34.953578 kernel: io scheduler kyber registered
Mar 17 17:49:34.953586 kernel: io scheduler bfq registered
Mar 17 17:49:34.953597 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 17 17:49:34.953605 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 17 17:49:34.953613 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 17 17:49:34.953621 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 17 17:49:34.953629 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 17 17:49:34.953637 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 17 17:49:34.953645 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 17 17:49:34.953653 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 17 17:49:34.953661 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 17 17:49:34.953816 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 17 17:49:34.953832 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 17 17:49:34.953979 kernel: rtc_cmos 00:04: registered as rtc0
Mar 17 17:49:34.954121 kernel: rtc_cmos 00:04: setting system clock to 2025-03-17T17:49:34 UTC (1742233774)
Mar 17 17:49:34.954235 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 17 17:49:34.954245 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 17 17:49:34.954253 kernel: NET: Registered PF_INET6 protocol family
Mar 17 17:49:34.954261 kernel: Segment Routing with IPv6
Mar 17 17:49:34.954273 kernel: In-situ OAM (IOAM) with IPv6
Mar 17 17:49:34.954282 kernel: NET: Registered PF_PACKET protocol family
Mar 17 17:49:34.954289 kernel: Key type dns_resolver registered
Mar 17 17:49:34.954297 kernel: IPI shorthand broadcast: enabled
Mar 17 17:49:34.954305 kernel: sched_clock: Marking stable (682002591, 108871108)->(847081608, -56207909)
Mar 17 17:49:34.954313 kernel: registered taskstats version 1
Mar 17 17:49:34.954321 kernel: Loading compiled-in X.509 certificates
Mar 17 17:49:34.954329 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: 608fb88224bc0ea76afefc598557abb0413f36c0'
Mar 17 17:49:34.954337 kernel: Key type .fscrypt registered
Mar 17 17:49:34.954348 kernel: Key type fscrypt-provisioning registered
Mar 17 17:49:34.954356 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 17 17:49:34.954364 kernel: ima: Allocated hash algorithm: sha1
Mar 17 17:49:34.954372 kernel: ima: No architecture policies found
Mar 17 17:49:34.954380 kernel: clk: Disabling unused clocks
Mar 17 17:49:34.954388 kernel: Freeing unused kernel image (initmem) memory: 42992K
Mar 17 17:49:34.954396 kernel: Write protecting the kernel read-only data: 36864k
Mar 17 17:49:34.954404 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K
Mar 17 17:49:34.954412 kernel: Run /init as init process
Mar 17 17:49:34.954422 kernel: with arguments:
Mar 17 17:49:34.954430 kernel: /init
Mar 17 17:49:34.954437 kernel: with environment:
Mar 17 17:49:34.954445 kernel: HOME=/
Mar 17 17:49:34.954453 kernel: TERM=linux
Mar 17 17:49:34.954461 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 17 17:49:34.954471 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 17 17:49:34.954481 systemd[1]: Detected virtualization kvm.
Mar 17 17:49:34.954492 systemd[1]: Detected architecture x86-64.
Mar 17 17:49:34.954500 systemd[1]: Running in initrd.
Mar 17 17:49:34.954508 systemd[1]: No hostname configured, using default hostname.
Mar 17 17:49:34.954516 systemd[1]: Hostname set to .
Mar 17 17:49:34.954525 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 17:49:34.954534 systemd[1]: Queued start job for default target initrd.target.
Mar 17 17:49:34.954542 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:49:34.954550 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:49:34.954562 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 17 17:49:34.954585 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 17 17:49:34.954599 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 17 17:49:34.954611 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 17 17:49:34.954624 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 17 17:49:34.954639 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 17 17:49:34.954651 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:49:34.954662 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:49:34.954674 systemd[1]: Reached target paths.target - Path Units.
Mar 17 17:49:34.954685 systemd[1]: Reached target slices.target - Slice Units.
Mar 17 17:49:34.954697 systemd[1]: Reached target swap.target - Swaps.
Mar 17 17:49:34.954708 systemd[1]: Reached target timers.target - Timer Units.
Mar 17 17:49:34.954719 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 17 17:49:34.954734 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 17 17:49:34.954746 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 17 17:49:34.954757 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 17 17:49:34.954769 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:49:34.954780 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:49:34.954792 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:49:34.954803 systemd[1]: Reached target sockets.target - Socket Units.
Mar 17 17:49:34.954814 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 17 17:49:34.954829 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 17 17:49:34.954844 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 17 17:49:34.954855 systemd[1]: Starting systemd-fsck-usr.service...
Mar 17 17:49:34.954867 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 17 17:49:34.954888 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 17 17:49:34.954899 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:49:34.954911 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 17 17:49:34.954922 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:49:34.954962 systemd-journald[193]: Collecting audit messages is disabled.
Mar 17 17:49:34.954993 systemd[1]: Finished systemd-fsck-usr.service.
Mar 17 17:49:34.955006 systemd-journald[193]: Journal started
Mar 17 17:49:34.955036 systemd-journald[193]: Runtime Journal (/run/log/journal/8f12977a54e34390b8535b5341f5e4e6) is 6.0M, max 48.4M, 42.3M free.
Mar 17 17:49:34.947326 systemd-modules-load[195]: Inserted module 'overlay'
Mar 17 17:49:34.992219 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 17 17:49:34.992250 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 17 17:49:34.992263 kernel: Bridge firewalling registered
Mar 17 17:49:34.976073 systemd-modules-load[195]: Inserted module 'br_netfilter'
Mar 17 17:49:34.994189 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 17 17:49:34.995875 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:49:34.998525 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:49:35.001046 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 17 17:49:35.016318 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:49:35.020172 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 17 17:49:35.023395 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 17 17:49:35.026618 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 17 17:49:35.037100 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:49:35.039902 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:49:35.041534 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:49:35.044451 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:49:35.057296 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 17 17:49:35.061243 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 17 17:49:35.074114 dracut-cmdline[230]: dracut-dracut-053
Mar 17 17:49:35.077302 dracut-cmdline[230]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d4b838cd9a6f58e8c4a6b615c32b0b28ee0df1660e34033a8fbd0429c6de5fd0
Mar 17 17:49:35.096492 systemd-resolved[232]: Positive Trust Anchors:
Mar 17 17:49:35.096518 systemd-resolved[232]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 17:49:35.096554 systemd-resolved[232]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 17 17:49:35.099611 systemd-resolved[232]: Defaulting to hostname 'linux'.
Mar 17 17:49:35.100911 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 17 17:49:35.107705 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:49:35.171057 kernel: SCSI subsystem initialized
Mar 17 17:49:35.180035 kernel: Loading iSCSI transport class v2.0-870.
Mar 17 17:49:35.191086 kernel: iscsi: registered transport (tcp)
Mar 17 17:49:35.213058 kernel: iscsi: registered transport (qla4xxx)
Mar 17 17:49:35.213140 kernel: QLogic iSCSI HBA Driver
Mar 17 17:49:35.269883 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 17 17:49:35.280163 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 17 17:49:35.308458 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 17 17:49:35.308501 kernel: device-mapper: uevent: version 1.0.3
Mar 17 17:49:35.309527 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 17 17:49:35.351054 kernel: raid6: avx2x4 gen() 29878 MB/s
Mar 17 17:49:35.368042 kernel: raid6: avx2x2 gen() 29109 MB/s
Mar 17 17:49:35.385196 kernel: raid6: avx2x1 gen() 25192 MB/s
Mar 17 17:49:35.385254 kernel: raid6: using algorithm avx2x4 gen() 29878 MB/s
Mar 17 17:49:35.438206 kernel: raid6: .... xor() 7001 MB/s, rmw enabled
Mar 17 17:49:35.438279 kernel: raid6: using avx2x2 recovery algorithm
Mar 17 17:49:35.459040 kernel: xor: automatically using best checksumming function avx
Mar 17 17:49:35.633052 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 17 17:49:35.644085 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 17 17:49:35.653199 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:49:35.665009 systemd-udevd[417]: Using default interface naming scheme 'v255'.
Mar 17 17:49:35.708475 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:49:35.711712 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 17 17:49:35.728583 dracut-pre-trigger[428]: rd.md=0: removing MD RAID activation
Mar 17 17:49:35.757318 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 17 17:49:35.818146 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 17 17:49:35.948137 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:49:35.984085 kernel: cryptd: max_cpu_qlen set to 1000
Mar 17 17:49:35.995572 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 17 17:49:36.001224 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Mar 17 17:49:36.046062 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 17 17:49:36.046263 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 17 17:49:36.046279 kernel: AES CTR mode by8 optimization enabled
Mar 17 17:49:36.046292 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 17 17:49:36.046306 kernel: GPT:9289727 != 19775487
Mar 17 17:49:36.046328 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 17 17:49:36.046341 kernel: GPT:9289727 != 19775487
Mar 17 17:49:36.046353 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 17 17:49:36.046366 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 17:49:36.004935 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 17:49:36.005065 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:49:36.054137 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:49:36.057174 kernel: libata version 3.00 loaded.
Mar 17 17:49:36.057210 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:49:36.058725 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:49:36.061610 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:49:36.070342 kernel: ahci 0000:00:1f.2: version 3.0
Mar 17 17:49:36.108723 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 17 17:49:36.108749 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 17 17:49:36.108981 kernel: BTRFS: device fsid 2b8ebefd-e897-48f6-96d5-0893fbb7c64a devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (460)
Mar 17 17:49:36.108997 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 17 17:49:36.109198 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (473)
Mar 17 17:49:36.109214 kernel: scsi host0: ahci
Mar 17 17:49:36.109393 kernel: scsi host1: ahci
Mar 17 17:49:36.109579 kernel: scsi host2: ahci
Mar 17 17:49:36.109771 kernel: scsi host3: ahci
Mar 17 17:49:36.109934 kernel: scsi host4: ahci
Mar 17 17:49:36.110097 kernel: scsi host5: ahci
Mar 17 17:49:36.110239 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Mar 17 17:49:36.110257 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Mar 17 17:49:36.110275 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Mar 17 17:49:36.110286 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Mar 17 17:49:36.110296 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Mar 17 17:49:36.110310 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Mar 17 17:49:36.072884 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:49:36.087867 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 17 17:49:36.106254 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 17 17:49:36.112890 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 17 17:49:36.136094 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 17 17:49:36.169446 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 17 17:49:36.169570 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 17 17:49:36.173316 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:49:36.181238 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 17 17:49:36.181324 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:49:36.186827 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 17 17:49:36.201207 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 17 17:49:36.202329 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:49:36.205667 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 17 17:49:36.222721 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 17 17:49:36.231528 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:49:36.248906 disk-uuid[564]: Primary Header is updated.
Mar 17 17:49:36.248906 disk-uuid[564]: Secondary Entries is updated.
Mar 17 17:49:36.248906 disk-uuid[564]: Secondary Header is updated.
Mar 17 17:49:36.253243 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 17:49:36.258062 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 17:49:36.417293 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 17 17:49:36.417392 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 17 17:49:36.417412 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 17 17:49:36.419042 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 17 17:49:36.420048 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 17 17:49:36.420123 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 17 17:49:36.421286 kernel: ata3.00: applying bridge limits
Mar 17 17:49:36.422035 kernel: ata3.00: configured for UDMA/100
Mar 17 17:49:36.424054 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 17 17:49:36.443038 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 17 17:49:36.468063 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 17 17:49:36.481762 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 17 17:49:36.481778 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Mar 17 17:49:37.298793 disk-uuid[579]: The operation has completed successfully.
Mar 17 17:49:37.300293 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 17:49:37.337379 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 17 17:49:37.337503 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 17 17:49:37.357478 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 17 17:49:37.361241 sh[595]: Success
Mar 17 17:49:37.374044 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Mar 17 17:49:37.409939 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 17 17:49:37.424127 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 17 17:49:37.429197 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 17 17:49:37.461727 kernel: BTRFS info (device dm-0): first mount of filesystem 2b8ebefd-e897-48f6-96d5-0893fbb7c64a
Mar 17 17:49:37.461763 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 17 17:49:37.461775 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 17 17:49:37.462773 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 17 17:49:37.463567 kernel: BTRFS info (device dm-0): using free space tree
Mar 17 17:49:37.468924 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 17 17:49:37.469738 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 17 17:49:37.490268 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 17 17:49:37.492832 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 17 17:49:37.506790 kernel: BTRFS info (device vda6): first mount of filesystem 7b241d32-136b-4fe3-b105-cecff2b2cf64
Mar 17 17:49:37.506858 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 17 17:49:37.506869 kernel: BTRFS info (device vda6): using free space tree
Mar 17 17:49:37.510051 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 17 17:49:37.520926 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 17 17:49:37.522868 kernel: BTRFS info (device vda6): last unmount of filesystem 7b241d32-136b-4fe3-b105-cecff2b2cf64
Mar 17 17:49:37.535748 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 17 17:49:37.547476 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 17 17:49:37.672721 ignition[697]: Ignition 2.20.0
Mar 17 17:49:37.672748 ignition[697]: Stage: fetch-offline
Mar 17 17:49:37.672801 ignition[697]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:49:37.672811 ignition[697]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 17:49:37.672927 ignition[697]: parsed url from cmdline: ""
Mar 17 17:49:37.672931 ignition[697]: no config URL provided
Mar 17 17:49:37.672937 ignition[697]: reading system config file "/usr/lib/ignition/user.ign"
Mar 17 17:49:37.672945 ignition[697]: no config at "/usr/lib/ignition/user.ign"
Mar 17 17:49:37.680062 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 17 17:49:37.672977 ignition[697]: op(1): [started] loading QEMU firmware config module
Mar 17 17:49:37.672982 ignition[697]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 17 17:49:37.705775 ignition[697]: op(1): [finished] loading QEMU firmware config module
Mar 17 17:49:37.710303 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 17 17:49:37.733771 systemd-networkd[784]: lo: Link UP
Mar 17 17:49:37.733784 systemd-networkd[784]: lo: Gained carrier
Mar 17 17:49:37.735785 systemd-networkd[784]: Enumeration completed
Mar 17 17:49:37.736300 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:49:37.736305 systemd-networkd[784]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 17:49:37.737698 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 17 17:49:37.740351 systemd-networkd[784]: eth0: Link UP
Mar 17 17:49:37.740356 systemd-networkd[784]: eth0: Gained carrier
Mar 17 17:49:37.740364 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:49:37.741616 systemd[1]: Reached target network.target - Network.
Mar 17 17:49:37.765125 systemd-networkd[784]: eth0: DHCPv4 address 10.0.0.104/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 17 17:49:37.767574 ignition[697]: parsing config with SHA512: 9a1ff60ac7ee47d75f93998e8737f138cc059de2d551356535770d9af1933c524035b9044eafa38622a52fb4cb12065dd5755d3533586072e95db31ac3e6f25d
Mar 17 17:49:37.775338 unknown[697]: fetched base config from "system"
Mar 17 17:49:37.775357 unknown[697]: fetched user config from "qemu"
Mar 17 17:49:37.776054 ignition[697]: fetch-offline: fetch-offline passed
Mar 17 17:49:37.776155 ignition[697]: Ignition finished successfully
Mar 17 17:49:37.779649 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 17 17:49:37.782924 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 17 17:49:37.794333 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 17 17:49:37.863094 ignition[788]: Ignition 2.20.0
Mar 17 17:49:37.863107 ignition[788]: Stage: kargs
Mar 17 17:49:37.863344 ignition[788]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:49:37.863359 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 17:49:37.864441 ignition[788]: kargs: kargs passed
Mar 17 17:49:37.864492 ignition[788]: Ignition finished successfully
Mar 17 17:49:37.871054 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 17 17:49:37.885346 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 17 17:49:37.931134 ignition[796]: Ignition 2.20.0
Mar 17 17:49:37.931149 ignition[796]: Stage: disks
Mar 17 17:49:37.932525 ignition[796]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:49:37.932549 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 17:49:37.938108 ignition[796]: disks: disks passed
Mar 17 17:49:37.938162 ignition[796]: Ignition finished successfully
Mar 17 17:49:37.941776 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 17 17:49:37.944069 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 17 17:49:37.944151 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 17 17:49:37.946370 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 17 17:49:37.948832 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 17 17:49:37.950836 systemd[1]: Reached target basic.target - Basic System.
Mar 17 17:49:37.964203 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 17 17:49:37.967127 systemd-resolved[232]: Detected conflict on linux IN A 10.0.0.104
Mar 17 17:49:37.967140 systemd-resolved[232]: Hostname conflict, changing published hostname from 'linux' to 'linux5'.
Mar 17 17:49:37.978742 systemd-fsck[806]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 17 17:49:37.985320 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 17 17:49:37.999113 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 17 17:49:38.086040 kernel: EXT4-fs (vda9): mounted filesystem 345fc709-8965-4219-b368-16e508c3d632 r/w with ordered data mode. Quota mode: none.
Mar 17 17:49:38.086205 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 17 17:49:38.086878 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 17 17:49:38.098479 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 17:49:38.101123 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 17 17:49:38.101544 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 17 17:49:38.110687 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (814)
Mar 17 17:49:38.110731 kernel: BTRFS info (device vda6): first mount of filesystem 7b241d32-136b-4fe3-b105-cecff2b2cf64
Mar 17 17:49:38.110749 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 17 17:49:38.110763 kernel: BTRFS info (device vda6): using free space tree
Mar 17 17:49:38.101605 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 17 17:49:38.113990 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 17 17:49:38.101638 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 17 17:49:38.116419 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 17 17:49:38.132868 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 17 17:49:38.134815 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 17 17:49:38.177521 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory
Mar 17 17:49:38.182325 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory
Mar 17 17:49:38.188071 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory
Mar 17 17:49:38.193492 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 17 17:49:38.290222 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 17 17:49:38.300359 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 17 17:49:38.303810 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 17 17:49:38.309034 kernel: BTRFS info (device vda6): last unmount of filesystem 7b241d32-136b-4fe3-b105-cecff2b2cf64
Mar 17 17:49:38.327564 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 17 17:49:38.377910 ignition[931]: INFO : Ignition 2.20.0
Mar 17 17:49:38.377910 ignition[931]: INFO : Stage: mount
Mar 17 17:49:38.379963 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:49:38.379963 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 17:49:38.379963 ignition[931]: INFO : mount: mount passed
Mar 17 17:49:38.379963 ignition[931]: INFO : Ignition finished successfully
Mar 17 17:49:38.381858 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 17 17:49:38.395211 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 17 17:49:38.460360 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 17 17:49:38.469468 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 17:49:38.481081 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (940)
Mar 17 17:49:38.483257 kernel: BTRFS info (device vda6): first mount of filesystem 7b241d32-136b-4fe3-b105-cecff2b2cf64
Mar 17 17:49:38.483280 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 17 17:49:38.483293 kernel: BTRFS info (device vda6): using free space tree
Mar 17 17:49:38.487037 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 17 17:49:38.488722 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 17 17:49:38.518025 ignition[957]: INFO : Ignition 2.20.0
Mar 17 17:49:38.518025 ignition[957]: INFO : Stage: files
Mar 17 17:49:38.519931 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:49:38.519931 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 17:49:38.519931 ignition[957]: DEBUG : files: compiled without relabeling support, skipping
Mar 17 17:49:38.519931 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 17 17:49:38.519931 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 17 17:49:38.527452 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 17 17:49:38.527452 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 17 17:49:38.527452 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 17 17:49:38.527452 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Mar 17 17:49:38.527452 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Mar 17 17:49:38.523252 unknown[957]: wrote ssh authorized keys file for user: core
Mar 17 17:49:38.588636 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 17 17:49:38.826694 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Mar 17 17:49:38.826694 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 17 17:49:38.833447 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 17 17:49:38.833447 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 17:49:38.833447 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 17:49:38.833447 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 17:49:38.833447 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 17:49:38.833447 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 17:49:38.833447 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 17:49:38.833447 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 17:49:38.833447 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 17:49:38.833447 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Mar 17 17:49:38.833447 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Mar 17 17:49:38.833447 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Mar 17 17:49:38.833447 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1
Mar 17 17:49:39.182957 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 17 17:49:39.409250 systemd-networkd[784]: eth0: Gained IPv6LL
Mar 17 17:49:39.986715 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Mar 17 17:49:39.986715 ignition[957]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 17 17:49:39.990993 ignition[957]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 17:49:39.990993 ignition[957]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 17:49:39.990993 ignition[957]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 17 17:49:39.990993 ignition[957]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Mar 17 17:49:39.990993 ignition[957]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 17 17:49:39.990993 ignition[957]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 17 17:49:39.990993 ignition[957]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Mar 17 17:49:39.990993 ignition[957]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Mar 17 17:49:40.024642 ignition[957]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 17 17:49:40.032908 ignition[957]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 17 17:49:40.035113 ignition[957]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 17 17:49:40.035113 ignition[957]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Mar 17 17:49:40.035113 ignition[957]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Mar 17 17:49:40.035113 ignition[957]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 17:49:40.035113 ignition[957]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 17:49:40.035113 ignition[957]: INFO : files: files passed
Mar 17 17:49:40.035113 ignition[957]: INFO : Ignition finished successfully
Mar 17 17:49:40.036517 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 17 17:49:40.048322 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 17 17:49:40.050866 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 17 17:49:40.053276 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 17 17:49:40.053413 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 17 17:49:40.063571 initrd-setup-root-after-ignition[986]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 17 17:49:40.067614 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:49:40.067614 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:49:40.071539 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:49:40.071222 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 17 17:49:40.073414 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 17 17:49:40.088366 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 17 17:49:40.121312 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 17 17:49:40.121488 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 17 17:49:40.124443 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 17 17:49:40.126885 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 17 17:49:40.127061 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 17 17:49:40.128223 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 17 17:49:40.150381 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 17 17:49:40.169435 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 17 17:49:40.183123 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:49:40.184644 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:49:40.187162 systemd[1]: Stopped target timers.target - Timer Units.
Mar 17 17:49:40.189270 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 17 17:49:40.189433 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 17 17:49:40.191881 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 17 17:49:40.193467 systemd[1]: Stopped target basic.target - Basic System.
Mar 17 17:49:40.195563 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 17 17:49:40.197691 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 17 17:49:40.199745 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 17 17:49:40.201960 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 17 17:49:40.204230 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 17 17:49:40.206632 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 17 17:49:40.208690 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 17 17:49:40.210955 systemd[1]: Stopped target swap.target - Swaps.
Mar 17 17:49:40.212764 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 17 17:49:40.212939 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 17 17:49:40.215300 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:49:40.216757 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:49:40.218931 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 17 17:49:40.219057 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:49:40.221396 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 17 17:49:40.221538 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 17 17:49:40.224380 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 17 17:49:40.224506 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 17 17:49:40.226699 systemd[1]: Stopped target paths.target - Path Units.
Mar 17 17:49:40.228497 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 17 17:49:40.232079 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:49:40.233562 systemd[1]: Stopped target slices.target - Slice Units.
Mar 17 17:49:40.236204 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 17 17:49:40.238645 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 17 17:49:40.238806 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 17 17:49:40.240636 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 17 17:49:40.240793 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 17 17:49:40.243147 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 17 17:49:40.243296 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 17 17:49:40.246359 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 17 17:49:40.246508 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 17 17:49:40.256244 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 17 17:49:40.259233 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 17 17:49:40.260625 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 17 17:49:40.260927 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:49:40.263346 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 17 17:49:40.263585 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 17 17:49:40.269367 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 17 17:49:40.269517 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 17 17:49:40.277353 ignition[1013]: INFO : Ignition 2.20.0
Mar 17 17:49:40.277353 ignition[1013]: INFO : Stage: umount
Mar 17 17:49:40.277353 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:49:40.277353 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 17:49:40.277353 ignition[1013]: INFO : umount: umount passed
Mar 17 17:49:40.277353 ignition[1013]: INFO : Ignition finished successfully
Mar 17 17:49:40.278376 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 17 17:49:40.278527 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 17 17:49:40.280989 systemd[1]: Stopped target network.target - Network.
Mar 17 17:49:40.283263 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 17 17:49:40.283323 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 17 17:49:40.285806 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 17 17:49:40.285857 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 17 17:49:40.287736 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 17 17:49:40.287784 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 17 17:49:40.289899 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 17 17:49:40.289948 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 17 17:49:40.291839 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 17 17:49:40.294486 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 17 17:49:40.297965 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 17 17:49:40.300075 systemd-networkd[784]: eth0: DHCPv6 lease lost
Mar 17 17:49:40.301609 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 17 17:49:40.301764 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 17 17:49:40.305302 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 17 17:49:40.305435 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 17 17:49:40.308707 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 17 17:49:40.308809 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:49:40.324211 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 17 17:49:40.326214 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 17 17:49:40.326297 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 17 17:49:40.330298 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 17 17:49:40.330387 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:49:40.333481 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 17 17:49:40.333556 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:49:40.339710 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 17 17:49:40.341034 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:49:40.344250 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:49:40.374078 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 17 17:49:40.374286 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 17 17:49:40.379928 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 17 17:49:40.380136 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:49:40.382957 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 17 17:49:40.383007 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:49:40.385508 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 17 17:49:40.385547 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:49:40.388045 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 17 17:49:40.388096 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 17 17:49:40.390655 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 17 17:49:40.390703 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 17 17:49:40.393127 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 17:49:40.393176 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:49:40.403350 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 17 17:49:40.405222 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 17 17:49:40.405318 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:49:40.408217 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 17 17:49:40.408287 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 17 17:49:40.411068 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 17 17:49:40.411128 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:49:40.411248 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:49:40.411304 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:49:40.412207 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 17 17:49:40.412347 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 17 17:49:40.497991 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 17 17:49:40.498158 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 17 17:49:40.500666 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 17 17:49:40.502815 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 17 17:49:40.502880 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 17 17:49:40.520317 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 17 17:49:40.530436 systemd[1]: Switching root.
Mar 17 17:49:40.559334 systemd-journald[193]: Journal stopped
Mar 17 17:49:41.981491 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Mar 17 17:49:41.981553 kernel: SELinux: policy capability network_peer_controls=1
Mar 17 17:49:41.981571 kernel: SELinux: policy capability open_perms=1
Mar 17 17:49:41.981582 kernel: SELinux: policy capability extended_socket_class=1
Mar 17 17:49:41.981593 kernel: SELinux: policy capability always_check_network=0
Mar 17 17:49:41.981610 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 17 17:49:41.981629 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 17 17:49:41.981650 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 17 17:49:41.981688 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 17 17:49:41.981709 kernel: audit: type=1403 audit(1742233781.014:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 17 17:49:41.981735 systemd[1]: Successfully loaded SELinux policy in 45.685ms.
Mar 17 17:49:41.981781 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 15.487ms.
Mar 17 17:49:41.981811 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 17 17:49:41.981824 systemd[1]: Detected virtualization kvm.
Mar 17 17:49:41.981835 systemd[1]: Detected architecture x86-64.
Mar 17 17:49:41.981847 systemd[1]: Detected first boot.
Mar 17 17:49:41.981858 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 17:49:41.981873 zram_generator::config[1057]: No configuration found.
Mar 17 17:49:41.981885 systemd[1]: Populated /etc with preset unit settings.
Mar 17 17:49:41.981897 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 17 17:49:41.981909 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 17 17:49:41.981921 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 17 17:49:41.981939 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 17 17:49:41.981951 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 17 17:49:41.981962 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 17 17:49:41.981976 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 17 17:49:41.981988 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 17 17:49:41.982001 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 17 17:49:41.982366 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 17 17:49:41.982701 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 17 17:49:41.983106 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:49:41.983120 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:49:41.983132 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 17 17:49:41.983144 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 17 17:49:41.983160 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 17 17:49:41.983172 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 17 17:49:41.983185 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 17 17:49:41.983201 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:49:41.983213 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 17 17:49:41.983225 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 17 17:49:41.983237 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 17 17:49:41.983252 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 17 17:49:41.983264 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:49:41.983276 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 17 17:49:41.983288 systemd[1]: Reached target slices.target - Slice Units.
Mar 17 17:49:41.983301 systemd[1]: Reached target swap.target - Swaps.
Mar 17 17:49:41.983314 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 17 17:49:41.983326 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 17 17:49:41.983338 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:49:41.984990 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:49:41.985048 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:49:41.985067 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 17 17:49:41.985079 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 17 17:49:41.985094 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 17 17:49:41.985106 systemd[1]: Mounting media.mount - External Media Directory...
Mar 17 17:49:41.985120 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:49:41.985132 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 17 17:49:41.985144 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 17 17:49:41.985157 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 17 17:49:41.985172 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 17 17:49:41.985183 systemd[1]: Reached target machines.target - Containers.
Mar 17 17:49:41.985195 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 17 17:49:41.985208 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:49:41.985220 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 17 17:49:41.985232 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 17 17:49:41.985244 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:49:41.985256 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 17 17:49:41.985268 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:49:41.985282 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 17 17:49:41.985294 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:49:41.985306 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 17 17:49:41.985319 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 17 17:49:41.985331 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 17 17:49:41.985343 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 17 17:49:41.985355 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 17 17:49:41.985367 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 17 17:49:41.985382 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 17 17:49:41.985395 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 17 17:49:41.985407 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 17 17:49:41.985443 systemd-journald[1127]: Collecting audit messages is disabled.
Mar 17 17:49:41.985466 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 17 17:49:41.985478 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 17 17:49:41.985490 kernel: fuse: init (API version 7.39)
Mar 17 17:49:41.985502 systemd[1]: Stopped verity-setup.service.
Mar 17 17:49:41.985516 kernel: loop: module loaded
Mar 17 17:49:41.985528 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:49:41.985541 systemd-journald[1127]: Journal started
Mar 17 17:49:41.985563 systemd-journald[1127]: Runtime Journal (/run/log/journal/8f12977a54e34390b8535b5341f5e4e6) is 6.0M, max 48.4M, 42.3M free.
Mar 17 17:49:41.709979 systemd[1]: Queued start job for default target multi-user.target.
Mar 17 17:49:41.730632 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 17 17:49:41.731163 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 17 17:49:41.991091 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 17 17:49:42.021054 kernel: ACPI: bus type drm_connector registered
Mar 17 17:49:42.027101 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 17 17:49:42.047359 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 17 17:49:42.048766 systemd[1]: Mounted media.mount - External Media Directory.
Mar 17 17:49:42.049917 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 17 17:49:42.059083 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 17 17:49:42.060375 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 17 17:49:42.061705 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:49:42.063628 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 17 17:49:42.063829 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 17 17:49:42.067115 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 17:49:42.067335 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 17:49:42.068810 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 17 17:49:42.068985 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 17 17:49:42.070433 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 17:49:42.070619 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 17:49:42.072212 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 17 17:49:42.072413 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 17 17:49:42.074820 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 17:49:42.074992 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 17:49:42.076630 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:49:42.079640 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 17 17:49:42.081293 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 17 17:49:42.094088 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 17 17:49:42.107190 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 17 17:49:42.109899 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 17 17:49:42.111505 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 17 17:49:42.111564 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 17 17:49:42.139390 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 17 17:49:42.143690 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 17 17:49:42.148591 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 17 17:49:42.150518 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:49:42.158744 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 17 17:49:42.162189 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 17 17:49:42.165096 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 17:49:42.167519 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 17 17:49:42.171951 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 17 17:49:42.174570 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 17 17:49:42.191788 systemd-journald[1127]: Time spent on flushing to /var/log/journal/8f12977a54e34390b8535b5341f5e4e6 is 178.314ms for 951 entries.
Mar 17 17:49:42.191788 systemd-journald[1127]: System Journal (/var/log/journal/8f12977a54e34390b8535b5341f5e4e6) is 8.0M, max 195.6M, 187.6M free.
Mar 17 17:49:42.422327 systemd-journald[1127]: Received client request to flush runtime journal.
Mar 17 17:49:42.422410 kernel: loop0: detected capacity change from 0 to 218376
Mar 17 17:49:42.191468 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 17 17:49:42.199559 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 17 17:49:42.206694 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 17 17:49:42.209172 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:49:42.211105 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 17 17:49:42.213080 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 17 17:49:42.216273 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 17 17:49:42.223865 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 17 17:49:42.229820 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 17 17:49:42.241282 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 17 17:49:42.248190 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 17 17:49:42.250345 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:49:42.426000 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 17 17:49:42.433643 udevadm[1182]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 17 17:49:42.455645 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 17 17:49:42.451842 systemd-tmpfiles[1172]: ACLs are not supported, ignoring.
Mar 17 17:49:42.451870 systemd-tmpfiles[1172]: ACLs are not supported, ignoring.
Mar 17 17:49:42.474521 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 17 17:49:42.491146 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 17 17:49:42.530351 kernel: loop1: detected capacity change from 0 to 138184
Mar 17 17:49:42.583376 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 17 17:49:42.711401 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 17 17:49:42.735755 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 17 17:49:42.736806 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 17 17:49:42.759051 kernel: loop2: detected capacity change from 0 to 140992
Mar 17 17:49:42.762832 systemd-tmpfiles[1192]: ACLs are not supported, ignoring.
Mar 17 17:49:42.762865 systemd-tmpfiles[1192]: ACLs are not supported, ignoring.
Mar 17 17:49:42.770729 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:49:42.897055 kernel: loop3: detected capacity change from 0 to 218376
Mar 17 17:49:42.924066 kernel: loop4: detected capacity change from 0 to 138184
Mar 17 17:49:42.949075 kernel: loop5: detected capacity change from 0 to 140992
Mar 17 17:49:42.968885 (sd-merge)[1198]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 17 17:49:42.969763 (sd-merge)[1198]: Merged extensions into '/usr'.
Mar 17 17:49:43.028544 systemd[1]: Reloading requested from client PID 1170 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 17 17:49:43.028568 systemd[1]: Reloading...
Mar 17 17:49:43.132335 zram_generator::config[1223]: No configuration found.
Mar 17 17:49:43.497338 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:49:43.567058 systemd[1]: Reloading finished in 537 ms.
Mar 17 17:49:43.629786 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 17 17:49:43.643344 systemd[1]: Starting ensure-sysext.service...
Mar 17 17:49:43.645874 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 17 17:49:43.662324 systemd[1]: Reloading requested from client PID 1260 ('systemctl') (unit ensure-sysext.service)...
Mar 17 17:49:43.662356 systemd[1]: Reloading...
Mar 17 17:49:43.692690 ldconfig[1165]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 17 17:49:43.750285 zram_generator::config[1287]: No configuration found.
Mar 17 17:49:43.766864 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 17 17:49:43.767687 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 17 17:49:43.768898 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 17 17:49:43.769367 systemd-tmpfiles[1261]: ACLs are not supported, ignoring.
Mar 17 17:49:43.769509 systemd-tmpfiles[1261]: ACLs are not supported, ignoring.
Mar 17 17:49:43.773522 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot.
Mar 17 17:49:43.773633 systemd-tmpfiles[1261]: Skipping /boot
Mar 17 17:49:43.800069 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot.
Mar 17 17:49:43.800086 systemd-tmpfiles[1261]: Skipping /boot
Mar 17 17:49:43.886171 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:49:43.937604 systemd[1]: Reloading finished in 274 ms.
Mar 17 17:49:43.956217 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 17 17:49:43.962622 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:49:43.982404 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 17 17:49:43.990582 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 17 17:49:43.993630 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 17 17:49:44.000194 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 17 17:49:44.003421 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 17 17:49:44.009618 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:49:44.009788 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:49:44.011188 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:49:44.013670 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:49:44.016232 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:49:44.017685 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:49:44.022315 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 17 17:49:44.023583 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:49:44.024620 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 17:49:44.024976 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 17:49:44.027778 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 17:49:44.027947 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 17:49:44.029954 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 17:49:44.030816 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 17:49:44.046335 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:49:44.046587 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:49:44.055575 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:49:44.065136 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:49:44.096085 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:49:44.097487 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:49:44.097725 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:49:44.098964 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 17 17:49:44.100976 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 17 17:49:44.103182 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 17 17:49:44.105173 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 17 17:49:44.107242 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 17:49:44.107432 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 17:49:44.109618 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 17:49:44.109910 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 17:49:44.112475 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 17:49:44.112737 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 17:49:44.113325 augenrules[1362]: No rules
Mar 17 17:49:44.114748 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 17 17:49:44.114972 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 17 17:49:44.126896 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:49:44.139419 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 17 17:49:44.163494 augenrules[1375]: /sbin/augenrules: No change
Mar 17 17:49:44.163851 augenrules[1397]: No rules
Mar 17 17:49:44.164411 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:49:44.165967 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:49:44.168239 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 17 17:49:44.183405 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:49:44.196044 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:49:44.197211 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:49:44.198767 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:49:44.204244 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 17 17:49:44.205412 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 17 17:49:44.205575 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:49:44.206825 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 17 17:49:44.208840 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 17 17:49:44.209266 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 17 17:49:44.216856 systemd-resolved[1333]: Positive Trust Anchors:
Mar 17 17:49:44.216865 systemd-resolved[1333]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 17:49:44.216897 systemd-resolved[1333]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 17 17:49:44.230106 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 17:49:44.230307 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 17:49:44.232257 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 17 17:49:44.232475 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 17 17:49:44.232958 systemd-resolved[1333]: Defaulting to hostname 'linux'.
Mar 17 17:49:44.234399 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 17:49:44.234613 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 17:49:44.236449 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 17 17:49:44.238177 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 17:49:44.238392 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 17:49:44.240322 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 17 17:49:44.244411 systemd[1]: Finished ensure-sysext.service.
Mar 17 17:49:44.247543 systemd-udevd[1406]: Using default interface naming scheme 'v255'.
Mar 17 17:49:44.251700 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:49:44.253274 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 17:49:44.253356 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 17 17:49:44.261171 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 17 17:49:44.271742 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:49:44.283343 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 17 17:49:44.304952 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 17 17:49:44.331955 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 17 17:49:44.339901 systemd[1]: Reached target time-set.target - System Time Set.
Mar 17 17:49:44.360710 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1417)
Mar 17 17:49:44.365767 systemd-networkd[1423]: lo: Link UP
Mar 17 17:49:44.366552 systemd-networkd[1423]: lo: Gained carrier
Mar 17 17:49:44.368200 systemd-networkd[1423]: Enumeration completed
Mar 17 17:49:44.368496 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 17 17:49:44.371007 systemd[1]: Reached target network.target - Network.
Mar 17 17:49:44.380306 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 17 17:49:44.397519 systemd-networkd[1423]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:49:44.397531 systemd-networkd[1423]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 17:49:44.399117 systemd-networkd[1423]: eth0: Link UP
Mar 17 17:49:44.399128 systemd-networkd[1423]: eth0: Gained carrier
Mar 17 17:49:44.399140 systemd-networkd[1423]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:49:44.437224 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Mar 17 17:49:44.441196 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 17 17:49:44.451679 kernel: ACPI: button: Power Button [PWRF]
Mar 17 17:49:44.451715 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Mar 17 17:49:44.451994 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 17 17:49:44.441136 systemd-networkd[1423]: eth0: DHCPv4 address 10.0.0.104/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 17 17:49:44.441799 systemd-timesyncd[1415]: Network configuration changed, trying to establish connection.
Mar 17 17:49:44.445190 systemd-timesyncd[1415]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Mar 17 17:49:44.445304 systemd-timesyncd[1415]: Initial clock synchronization to Mon 2025-03-17 17:49:44.683594 UTC.
Mar 17 17:49:44.477707 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 17 17:49:44.487204 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Mar 17 17:49:44.484200 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 17 17:49:44.489023 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:49:44.510038 kernel: mousedev: PS/2 mouse device common for all mice
Mar 17 17:49:44.517972 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 17 17:49:44.635102 kernel: kvm_amd: TSC scaling supported
Mar 17 17:49:44.635207 kernel: kvm_amd: Nested Virtualization enabled
Mar 17 17:49:44.635226 kernel: kvm_amd: Nested Paging enabled
Mar 17 17:49:44.636401 kernel: kvm_amd: LBR virtualization supported
Mar 17 17:49:44.636418 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Mar 17 17:49:44.637196 kernel: kvm_amd: Virtual GIF supported
Mar 17 17:49:44.661063 kernel: EDAC MC: Ver: 3.0.0
Mar 17 17:49:44.705050 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:49:44.713626 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 17 17:49:44.724708 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 17 17:49:44.749363 lvm[1461]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 17 17:49:44.789376 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 17 17:49:44.803600 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:49:44.807858 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 17 17:49:44.811182 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 17 17:49:44.814108 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 17 17:49:44.817336 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 17 17:49:44.819570 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 17 17:49:44.829889 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 17 17:49:44.834350 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 17 17:49:44.834410 systemd[1]: Reached target paths.target - Path Units.
Mar 17 17:49:44.837182 systemd[1]: Reached target timers.target - Timer Units.
Mar 17 17:49:44.840450 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 17 17:49:44.850388 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 17 17:49:44.866746 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 17 17:49:44.875975 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 17 17:49:44.878682 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 17 17:49:44.884039 systemd[1]: Reached target sockets.target - Socket Units.
Mar 17 17:49:44.885851 systemd[1]: Reached target basic.target - Basic System.
Mar 17 17:49:44.886051 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 17 17:49:44.886102 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 17 17:49:44.887959 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 17 17:49:44.894877 lvm[1465]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 17 17:49:44.898165 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 17 17:49:44.904192 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 17 17:49:44.913238 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 17 17:49:44.917260 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 17 17:49:44.923130 jq[1468]: false
Mar 17 17:49:44.924088 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 17 17:49:44.937243 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 17 17:49:44.943872 extend-filesystems[1469]: Found loop3
Mar 17 17:49:44.943872 extend-filesystems[1469]: Found loop4
Mar 17 17:49:44.943872 extend-filesystems[1469]: Found loop5
Mar 17 17:49:44.943872 extend-filesystems[1469]: Found sr0
Mar 17 17:49:44.984175 extend-filesystems[1469]: Found vda
Mar 17 17:49:44.984175 extend-filesystems[1469]: Found vda1
Mar 17 17:49:44.984175 extend-filesystems[1469]: Found vda2
Mar 17 17:49:44.984175 extend-filesystems[1469]: Found vda3
Mar 17 17:49:44.984175 extend-filesystems[1469]: Found usr
Mar 17 17:49:44.984175 extend-filesystems[1469]: Found vda4
Mar 17 17:49:44.984175 extend-filesystems[1469]: Found vda6
Mar 17 17:49:44.984175 extend-filesystems[1469]: Found vda7
Mar 17 17:49:44.984175 extend-filesystems[1469]: Found vda9
Mar 17 17:49:44.984175 extend-filesystems[1469]: Checking size of /dev/vda9
Mar 17 17:49:44.972314 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 17 17:49:45.018680 extend-filesystems[1469]: Resized partition /dev/vda9
Mar 17 17:49:44.988427 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 17 17:49:45.019486 dbus-daemon[1467]: [system] SELinux support is enabled
Mar 17 17:49:45.029477 extend-filesystems[1486]: resize2fs 1.47.1 (20-May-2024)
Mar 17 17:49:45.066108 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1430)
Mar 17 17:49:44.999349 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 17 17:49:45.006555 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 17 17:49:45.007357 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 17 17:49:45.029374 systemd[1]: Starting update-engine.service - Update Engine...
Mar 17 17:49:45.067314 jq[1489]: true
Mar 17 17:49:45.044922 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 17 17:49:45.046240 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 17 17:49:45.104405 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 17 17:49:45.145186 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Mar 17 17:49:45.151820 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 17 17:49:45.152178 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 17 17:49:45.152661 systemd[1]: motdgen.service: Deactivated successfully.
Mar 17 17:49:45.152995 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 17 17:49:45.157173 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 17 17:49:45.157495 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 17 17:49:45.179380 update_engine[1485]: I20250317 17:49:45.179226 1485 main.cc:92] Flatcar Update Engine starting
Mar 17 17:49:45.182239 update_engine[1485]: I20250317 17:49:45.182024 1485 update_check_scheduler.cc:74] Next update check in 7m4s
Mar 17 17:49:45.197835 (ntainerd)[1495]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 17 17:49:45.205999 jq[1494]: true
Mar 17 17:49:45.228677 systemd[1]: Started update-engine.service - Update Engine.
Mar 17 17:49:45.294431 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 17 17:49:45.294519 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 17 17:49:45.300418 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 17 17:49:45.300467 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 17 17:49:45.324371 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 17 17:49:45.577185 locksmithd[1520]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 17 17:49:45.745255 systemd-logind[1479]: Watching system buttons on /dev/input/event1 (Power Button)
Mar 17 17:49:45.745290 systemd-logind[1479]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 17 17:49:45.746033 sshd_keygen[1491]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 17 17:49:45.745987 systemd-logind[1479]: New seat seat0.
Mar 17 17:49:45.780807 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 17 17:49:45.812657 tar[1493]: linux-amd64/LICENSE
Mar 17 17:49:45.813436 tar[1493]: linux-amd64/helm
Mar 17 17:49:45.840605 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 17 17:49:45.895875 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 17 17:49:45.991097 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Mar 17 17:49:46.015890 systemd-networkd[1423]: eth0: Gained IPv6LL
Mar 17 17:49:46.023113 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 17 17:49:46.035802 systemd[1]: issuegen.service: Deactivated successfully.
Mar 17 17:49:46.036373 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 17 17:49:46.045882 systemd[1]: Reached target network-online.target - Network is Online.
Mar 17 17:49:46.071304 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Mar 17 17:49:46.098500 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:49:46.117538 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 17 17:49:46.237805 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 17 17:49:46.279314 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 17 17:49:46.281918 systemd[1]: coreos-metadata.service: Deactivated successfully.
Mar 17 17:49:46.282243 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Mar 17 17:49:46.288344 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 17 17:49:46.290249 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 17 17:49:46.295225 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 17 17:49:46.298769 systemd[1]: Reached target getty.target - Login Prompts.
Mar 17 17:49:46.371255 extend-filesystems[1486]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 17 17:49:46.371255 extend-filesystems[1486]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 17 17:49:46.371255 extend-filesystems[1486]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Mar 17 17:49:46.406822 extend-filesystems[1469]: Resized filesystem in /dev/vda9
Mar 17 17:49:46.379093 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 17 17:49:46.379428 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 17 17:49:46.402208 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 17 17:49:46.474589 bash[1519]: Updated "/home/core/.ssh/authorized_keys"
Mar 17 17:49:46.476901 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 17 17:49:46.611817 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 17 17:49:46.856564 containerd[1495]: time="2025-03-17T17:49:46.853398817Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Mar 17 17:49:47.016654 containerd[1495]: time="2025-03-17T17:49:47.016223988Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:49:47.036876 containerd[1495]: time="2025-03-17T17:49:47.036588874Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.83-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:49:47.036876 containerd[1495]: time="2025-03-17T17:49:47.036746529Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 17 17:49:47.036876 containerd[1495]: time="2025-03-17T17:49:47.036792552Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 17 17:49:47.038789 containerd[1495]: time="2025-03-17T17:49:47.038337281Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 17 17:49:47.038789 containerd[1495]: time="2025-03-17T17:49:47.038402182Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 17 17:49:47.038789 containerd[1495]: time="2025-03-17T17:49:47.038548840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:49:47.038789 containerd[1495]: time="2025-03-17T17:49:47.038567348Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:49:47.038968 containerd[1495]: time="2025-03-17T17:49:47.038934369Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:49:47.038968 containerd[1495]: time="2025-03-17T17:49:47.038953257Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 17 17:49:47.039056 containerd[1495]: time="2025-03-17T17:49:47.038976018Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:49:47.039056 containerd[1495]: time="2025-03-17T17:49:47.038992844Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 17 17:49:47.039210 containerd[1495]: time="2025-03-17T17:49:47.039147446Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:49:47.044165 containerd[1495]: time="2025-03-17T17:49:47.039495016Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:49:47.044165 containerd[1495]: time="2025-03-17T17:49:47.039806626Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:49:47.044165 containerd[1495]: time="2025-03-17T17:49:47.039830780Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 17 17:49:47.044165 containerd[1495]: time="2025-03-17T17:49:47.040660263Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 17 17:49:47.044165 containerd[1495]: time="2025-03-17T17:49:47.040743783Z" level=info msg="metadata content store policy set" policy=shared
Mar 17 17:49:47.161951 containerd[1495]: time="2025-03-17T17:49:47.159166510Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 17 17:49:47.161951 containerd[1495]: time="2025-03-17T17:49:47.160926753Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 17 17:49:47.163431 containerd[1495]: time="2025-03-17T17:49:47.162835695Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 17 17:49:47.163499 containerd[1495]: time="2025-03-17T17:49:47.163452019Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 17 17:49:47.163529 containerd[1495]: time="2025-03-17T17:49:47.163501158Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 17 17:49:47.166350 containerd[1495]: time="2025-03-17T17:49:47.163880027Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 17 17:49:47.166350 containerd[1495]: time="2025-03-17T17:49:47.164810297Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 17 17:49:47.166350 containerd[1495]: time="2025-03-17T17:49:47.165086214Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 17 17:49:47.166350 containerd[1495]: time="2025-03-17T17:49:47.165132647Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 17 17:49:47.166350 containerd[1495]: time="2025-03-17T17:49:47.165153891Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 17 17:49:47.166350 containerd[1495]: time="2025-03-17T17:49:47.165196298Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 17 17:49:47.166350 containerd[1495]: time="2025-03-17T17:49:47.165243736Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 17 17:49:47.166350 containerd[1495]: time="2025-03-17T17:49:47.165288816Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 17 17:49:47.166350 containerd[1495]: time="2025-03-17T17:49:47.165310276Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 17 17:49:47.166350 containerd[1495]: time="2025-03-17T17:49:47.165330730Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 17 17:49:47.166350 containerd[1495]: time="2025-03-17T17:49:47.165377225Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 17 17:49:47.166350 containerd[1495]: time="2025-03-17T17:49:47.165405910Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 17 17:49:47.166350 containerd[1495]: time="2025-03-17T17:49:47.165449791Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 17 17:49:47.166350 containerd[1495]: time="2025-03-17T17:49:47.165486684Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 17 17:49:47.166776 containerd[1495]: time="2025-03-17T17:49:47.165536089Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 17 17:49:47.166776 containerd[1495]: time="2025-03-17T17:49:47.165561945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 17 17:49:47.166776 containerd[1495]: time="2025-03-17T17:49:47.165584419Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 17 17:49:47.166776 containerd[1495]: time="2025-03-17T17:49:47.165626733Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 17 17:49:47.166776 containerd[1495]: time="2025-03-17T17:49:47.165648960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 17 17:49:47.166776 containerd[1495]: time="2025-03-17T17:49:47.165666055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 17 17:49:47.166776 containerd[1495]: time="2025-03-17T17:49:47.165712775Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 17 17:49:47.166776 containerd[1495]: time="2025-03-17T17:49:47.165799432Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 17 17:49:47.166776 containerd[1495]: time="2025-03-17T17:49:47.165832778Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 17 17:49:47.166776 containerd[1495]: time="2025-03-17T17:49:47.165873822Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 17 17:49:47.166776 containerd[1495]: time="2025-03-17T17:49:47.165892186Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 17 17:49:47.166776 containerd[1495]: time="2025-03-17T17:49:47.165910418Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 17 17:49:47.166776 containerd[1495]: time="2025-03-17T17:49:47.165955734Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 17 17:49:47.166776 containerd[1495]: time="2025-03-17T17:49:47.165995732Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 17 17:49:47.166776 containerd[1495]: time="2025-03-17T17:49:47.166050231Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 17 17:49:47.177650 containerd[1495]: time="2025-03-17T17:49:47.166068861Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 17 17:49:47.177650 containerd[1495]: time="2025-03-17T17:49:47.166158459Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 17 17:49:47.177650 containerd[1495]: time="2025-03-17T17:49:47.166219148Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 17 17:49:47.177650 containerd[1495]: time="2025-03-17T17:49:47.166238486Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 17 17:49:47.177650 containerd[1495]: time="2025-03-17T17:49:47.166279037Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 17 17:49:47.177650 containerd[1495]: time="2025-03-17T17:49:47.166296254Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 17 17:49:47.177650 containerd[1495]: time="2025-03-17T17:49:47.166318912Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 17 17:49:47.177650 containerd[1495]: time="2025-03-17T17:49:47.166338516Z" level=info msg="NRI interface is disabled by configuration."
Mar 17 17:49:47.177650 containerd[1495]: time="2025-03-17T17:49:47.166376311Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 17 17:49:47.172182 systemd[1]: Started containerd.service - containerd container runtime.
Mar 17 17:49:47.178618 containerd[1495]: time="2025-03-17T17:49:47.167207391Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: 
TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 17 17:49:47.178618 containerd[1495]: time="2025-03-17T17:49:47.167362782Z" level=info msg="Connect containerd service" Mar 17 17:49:47.178618 containerd[1495]: time="2025-03-17T17:49:47.167470119Z" level=info msg="using legacy CRI server" Mar 17 17:49:47.178618 containerd[1495]: time="2025-03-17T17:49:47.167484660Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 17 17:49:47.178618 containerd[1495]: time="2025-03-17T17:49:47.167838021Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 17 17:49:47.178618 containerd[1495]: time="2025-03-17T17:49:47.169407612Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 17:49:47.178618 containerd[1495]: time="2025-03-17T17:49:47.169927687Z" level=info msg="Start subscribing containerd event" Mar 17 17:49:47.178618 containerd[1495]: time="2025-03-17T17:49:47.170008727Z" level=info msg="Start recovering state" Mar 17 17:49:47.178618 containerd[1495]: time="2025-03-17T17:49:47.170128465Z" level=info msg="Start event monitor" Mar 17 17:49:47.178618 containerd[1495]: time="2025-03-17T17:49:47.170158686Z" level=info msg="Start snapshots syncer" Mar 
17 17:49:47.178618 containerd[1495]: time="2025-03-17T17:49:47.170175820Z" level=info msg="Start cni network conf syncer for default" Mar 17 17:49:47.178618 containerd[1495]: time="2025-03-17T17:49:47.170186058Z" level=info msg="Start streaming server" Mar 17 17:49:47.178618 containerd[1495]: time="2025-03-17T17:49:47.170804494Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 17 17:49:47.178618 containerd[1495]: time="2025-03-17T17:49:47.170979334Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 17 17:49:47.178618 containerd[1495]: time="2025-03-17T17:49:47.172375089Z" level=info msg="containerd successfully booted in 0.327575s" Mar 17 17:49:48.318043 tar[1493]: linux-amd64/README.md Mar 17 17:49:48.358239 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 17 17:49:50.387140 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 17 17:49:50.419822 systemd[1]: Started sshd@0-10.0.0.104:22-10.0.0.1:51330.service - OpenSSH per-connection server daemon (10.0.0.1:51330). Mar 17 17:49:50.733848 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 51330 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:49:50.740873 sshd-session[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:49:50.842755 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:49:50.853408 (kubelet)[1581]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:49:50.854356 systemd-logind[1479]: New session 1 of user core. Mar 17 17:49:50.855706 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 17 17:49:50.873879 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 17 17:49:50.896580 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Mar 17 17:49:50.948791 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 17 17:49:51.037544 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 17 17:49:51.046742 (systemd)[1585]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 17 17:49:51.333103 systemd[1585]: Queued start job for default target default.target. Mar 17 17:49:51.515115 systemd[1585]: Created slice app.slice - User Application Slice. Mar 17 17:49:51.515153 systemd[1585]: Reached target paths.target - Paths. Mar 17 17:49:51.515180 systemd[1585]: Reached target timers.target - Timers. Mar 17 17:49:51.524496 systemd[1585]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 17 17:49:51.569356 systemd[1585]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 17 17:49:51.570388 systemd[1585]: Reached target sockets.target - Sockets. Mar 17 17:49:51.570408 systemd[1585]: Reached target basic.target - Basic System. Mar 17 17:49:51.570459 systemd[1585]: Reached target default.target - Main User Target. Mar 17 17:49:51.570500 systemd[1585]: Startup finished in 500ms. Mar 17 17:49:51.570880 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 17 17:49:51.593395 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 17 17:49:51.596707 systemd[1]: Startup finished in 852ms (kernel) + 6.270s (initrd) + 10.627s (userspace) = 17.750s. Mar 17 17:49:51.681758 systemd[1]: Started sshd@1-10.0.0.104:22-10.0.0.1:51334.service - OpenSSH per-connection server daemon (10.0.0.1:51334). Mar 17 17:49:51.851716 sshd[1605]: Accepted publickey for core from 10.0.0.1 port 51334 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:49:51.854218 sshd-session[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:49:51.876835 systemd-logind[1479]: New session 2 of user core. 
Mar 17 17:49:51.888486 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 17 17:49:52.152629 sshd[1607]: Connection closed by 10.0.0.1 port 51334 Mar 17 17:49:52.149077 systemd[1]: Started sshd@2-10.0.0.104:22-10.0.0.1:51344.service - OpenSSH per-connection server daemon (10.0.0.1:51344). Mar 17 17:49:52.153199 sshd-session[1605]: pam_unix(sshd:session): session closed for user core Mar 17 17:49:52.161091 systemd[1]: sshd@1-10.0.0.104:22-10.0.0.1:51334.service: Deactivated successfully. Mar 17 17:49:52.163916 systemd[1]: session-2.scope: Deactivated successfully. Mar 17 17:49:52.171717 systemd-logind[1479]: Session 2 logged out. Waiting for processes to exit. Mar 17 17:49:52.173862 systemd-logind[1479]: Removed session 2. Mar 17 17:49:52.209825 sshd[1610]: Accepted publickey for core from 10.0.0.1 port 51344 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:49:52.212276 sshd-session[1610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:49:52.217955 systemd-logind[1479]: New session 3 of user core. Mar 17 17:49:52.227353 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 17 17:49:52.285146 sshd[1614]: Connection closed by 10.0.0.1 port 51344 Mar 17 17:49:52.285540 sshd-session[1610]: pam_unix(sshd:session): session closed for user core Mar 17 17:49:52.297574 systemd[1]: sshd@2-10.0.0.104:22-10.0.0.1:51344.service: Deactivated successfully. Mar 17 17:49:52.299630 systemd[1]: session-3.scope: Deactivated successfully. Mar 17 17:49:52.301505 systemd-logind[1479]: Session 3 logged out. Waiting for processes to exit. Mar 17 17:49:52.312964 systemd[1]: Started sshd@3-10.0.0.104:22-10.0.0.1:51350.service - OpenSSH per-connection server daemon (10.0.0.1:51350). Mar 17 17:49:52.315204 systemd-logind[1479]: Removed session 3. 
Mar 17 17:49:52.373043 sshd[1619]: Accepted publickey for core from 10.0.0.1 port 51350 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:49:52.375280 sshd-session[1619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:49:52.383306 systemd-logind[1479]: New session 4 of user core. Mar 17 17:49:52.395506 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 17 17:49:52.464964 sshd[1621]: Connection closed by 10.0.0.1 port 51350 Mar 17 17:49:52.464467 sshd-session[1619]: pam_unix(sshd:session): session closed for user core Mar 17 17:49:52.478448 systemd[1]: sshd@3-10.0.0.104:22-10.0.0.1:51350.service: Deactivated successfully. Mar 17 17:49:52.481169 systemd[1]: session-4.scope: Deactivated successfully. Mar 17 17:49:52.483459 systemd-logind[1479]: Session 4 logged out. Waiting for processes to exit. Mar 17 17:49:52.495611 systemd[1]: Started sshd@4-10.0.0.104:22-10.0.0.1:51366.service - OpenSSH per-connection server daemon (10.0.0.1:51366). Mar 17 17:49:52.496844 systemd-logind[1479]: Removed session 4. Mar 17 17:49:52.542908 sshd[1626]: Accepted publickey for core from 10.0.0.1 port 51366 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:49:52.546885 sshd-session[1626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:49:52.554900 systemd-logind[1479]: New session 5 of user core. Mar 17 17:49:52.565360 systemd[1]: Started session-5.scope - Session 5 of User core. 
Mar 17 17:49:52.661759 sudo[1629]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 17 17:49:52.662282 sudo[1629]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:49:52.682919 sudo[1629]: pam_unix(sudo:session): session closed for user root Mar 17 17:49:52.685576 sshd[1628]: Connection closed by 10.0.0.1 port 51366 Mar 17 17:49:52.686576 sshd-session[1626]: pam_unix(sshd:session): session closed for user core Mar 17 17:49:52.696735 systemd[1]: sshd@4-10.0.0.104:22-10.0.0.1:51366.service: Deactivated successfully. Mar 17 17:49:52.699310 systemd[1]: session-5.scope: Deactivated successfully. Mar 17 17:49:52.701993 systemd-logind[1479]: Session 5 logged out. Waiting for processes to exit. Mar 17 17:49:52.819552 systemd[1]: Started sshd@5-10.0.0.104:22-10.0.0.1:51378.service - OpenSSH per-connection server daemon (10.0.0.1:51378). Mar 17 17:49:52.822163 systemd-logind[1479]: Removed session 5. Mar 17 17:49:52.872037 sshd[1634]: Accepted publickey for core from 10.0.0.1 port 51378 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:49:52.874320 sshd-session[1634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:49:52.880285 systemd-logind[1479]: New session 6 of user core. Mar 17 17:49:52.904374 systemd[1]: Started session-6.scope - Session 6 of User core. 
Mar 17 17:49:52.973210 sudo[1640]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 17 17:49:52.973625 sudo[1640]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:49:52.978972 sudo[1640]: pam_unix(sudo:session): session closed for user root Mar 17 17:49:52.988252 sudo[1639]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 17 17:49:52.988647 sudo[1639]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:49:53.008678 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 17 17:49:53.066372 augenrules[1662]: No rules Mar 17 17:49:53.068813 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 17:49:53.069157 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 17 17:49:53.070997 sudo[1639]: pam_unix(sudo:session): session closed for user root Mar 17 17:49:53.072654 sshd[1638]: Connection closed by 10.0.0.1 port 51378 Mar 17 17:49:53.073005 sshd-session[1634]: pam_unix(sshd:session): session closed for user core Mar 17 17:49:53.082301 systemd[1]: sshd@5-10.0.0.104:22-10.0.0.1:51378.service: Deactivated successfully. Mar 17 17:49:53.084196 systemd[1]: session-6.scope: Deactivated successfully. Mar 17 17:49:53.086212 systemd-logind[1479]: Session 6 logged out. Waiting for processes to exit. Mar 17 17:49:53.112005 systemd[1]: Started sshd@6-10.0.0.104:22-10.0.0.1:51386.service - OpenSSH per-connection server daemon (10.0.0.1:51386). Mar 17 17:49:53.114426 systemd-logind[1479]: Removed session 6. 
Mar 17 17:49:53.135601 kubelet[1581]: E0317 17:49:53.135514 1581 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:49:53.141964 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:49:53.142263 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:49:53.142797 systemd[1]: kubelet.service: Consumed 4.182s CPU time. Mar 17 17:49:53.159557 sshd[1670]: Accepted publickey for core from 10.0.0.1 port 51386 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:49:53.161890 sshd-session[1670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:49:53.167694 systemd-logind[1479]: New session 7 of user core. Mar 17 17:49:53.178422 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 17 17:49:53.236953 sudo[1674]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 17 17:49:53.237411 sudo[1674]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:49:54.076413 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 17 17:49:54.078429 (dockerd)[1694]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 17 17:49:54.759194 dockerd[1694]: time="2025-03-17T17:49:54.759121190Z" level=info msg="Starting up" Mar 17 17:49:55.148572 dockerd[1694]: time="2025-03-17T17:49:55.148418404Z" level=info msg="Loading containers: start." 
Mar 17 17:49:55.383077 kernel: Initializing XFRM netlink socket Mar 17 17:49:55.484150 systemd-networkd[1423]: docker0: Link UP Mar 17 17:49:55.539393 dockerd[1694]: time="2025-03-17T17:49:55.539309561Z" level=info msg="Loading containers: done." Mar 17 17:49:55.566520 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2089558988-merged.mount: Deactivated successfully. Mar 17 17:49:55.567640 dockerd[1694]: time="2025-03-17T17:49:55.567508310Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 17 17:49:55.567857 dockerd[1694]: time="2025-03-17T17:49:55.567822091Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Mar 17 17:49:55.568092 dockerd[1694]: time="2025-03-17T17:49:55.568065946Z" level=info msg="Daemon has completed initialization" Mar 17 17:49:55.797250 dockerd[1694]: time="2025-03-17T17:49:55.797047853Z" level=info msg="API listen on /run/docker.sock" Mar 17 17:49:55.797373 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 17 17:49:56.508458 containerd[1495]: time="2025-03-17T17:49:56.508406893Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.3\"" Mar 17 17:49:57.218109 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3659703969.mount: Deactivated successfully. 
Mar 17 17:49:59.118127 containerd[1495]: time="2025-03-17T17:49:59.118052048Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:49:59.167294 containerd[1495]: time="2025-03-17T17:49:59.167123483Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.3: active requests=0, bytes read=28682430" Mar 17 17:49:59.171131 containerd[1495]: time="2025-03-17T17:49:59.171048129Z" level=info msg="ImageCreate event name:\"sha256:f8bdc4cfa0651e2d7edb4678d2b90129aef82a19249b37dc8d4705e8bd604295\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:49:59.181440 containerd[1495]: time="2025-03-17T17:49:59.181368919Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:279e45cf07e4f56925c3c5237179eb63616788426a96e94df5fedf728b18926e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:49:59.182583 containerd[1495]: time="2025-03-17T17:49:59.182499212Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.3\" with image id \"sha256:f8bdc4cfa0651e2d7edb4678d2b90129aef82a19249b37dc8d4705e8bd604295\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:279e45cf07e4f56925c3c5237179eb63616788426a96e94df5fedf728b18926e\", size \"28679230\" in 2.674039795s" Mar 17 17:49:59.182583 containerd[1495]: time="2025-03-17T17:49:59.182554508Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.3\" returns image reference \"sha256:f8bdc4cfa0651e2d7edb4678d2b90129aef82a19249b37dc8d4705e8bd604295\"" Mar 17 17:49:59.183776 containerd[1495]: time="2025-03-17T17:49:59.183519435Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.3\"" Mar 17 17:50:00.629634 containerd[1495]: time="2025-03-17T17:50:00.629553002Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.3\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:50:00.630491 containerd[1495]: time="2025-03-17T17:50:00.630441821Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.3: active requests=0, bytes read=24779684" Mar 17 17:50:00.631630 containerd[1495]: time="2025-03-17T17:50:00.631591967Z" level=info msg="ImageCreate event name:\"sha256:085818208a5213f37ef6d103caaf8e1e243816a614eb5b87a98bfffe79c687b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:50:00.634502 containerd[1495]: time="2025-03-17T17:50:00.634454729Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:54456a96a1bbdc35dcc2e70fcc1355bf655af67694e40b650ac12e83521f6411\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:50:00.635446 containerd[1495]: time="2025-03-17T17:50:00.635418760Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.3\" with image id \"sha256:085818208a5213f37ef6d103caaf8e1e243816a614eb5b87a98bfffe79c687b5\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:54456a96a1bbdc35dcc2e70fcc1355bf655af67694e40b650ac12e83521f6411\", size \"26267292\" in 1.45184041s" Mar 17 17:50:00.635507 containerd[1495]: time="2025-03-17T17:50:00.635447993Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.3\" returns image reference \"sha256:085818208a5213f37ef6d103caaf8e1e243816a614eb5b87a98bfffe79c687b5\"" Mar 17 17:50:00.635988 containerd[1495]: time="2025-03-17T17:50:00.635912344Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.3\"" Mar 17 17:50:02.932181 containerd[1495]: time="2025-03-17T17:50:02.932122105Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:50:02.932814 containerd[1495]: time="2025-03-17T17:50:02.932782823Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.3: active requests=0, bytes read=19171419" Mar 17 17:50:02.933974 containerd[1495]: time="2025-03-17T17:50:02.933941294Z" level=info msg="ImageCreate event name:\"sha256:b4260bf5078ab1b01dd05fb05015fc436b7100b7b9b5ea738e247a86008b16b8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:50:02.937287 containerd[1495]: time="2025-03-17T17:50:02.937244755Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:aafae2e3a8d65bc6dc3a0c6095c24bc72b1ff608e1417f0f5e860ce4a61c27df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:50:02.938424 containerd[1495]: time="2025-03-17T17:50:02.938378312Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.3\" with image id \"sha256:b4260bf5078ab1b01dd05fb05015fc436b7100b7b9b5ea738e247a86008b16b8\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:aafae2e3a8d65bc6dc3a0c6095c24bc72b1ff608e1417f0f5e860ce4a61c27df\", size \"20659045\" in 2.302420642s" Mar 17 17:50:02.938472 containerd[1495]: time="2025-03-17T17:50:02.938424158Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.3\" returns image reference \"sha256:b4260bf5078ab1b01dd05fb05015fc436b7100b7b9b5ea738e247a86008b16b8\"" Mar 17 17:50:02.939004 containerd[1495]: time="2025-03-17T17:50:02.938977134Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.3\"" Mar 17 17:50:03.392424 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 17 17:50:03.407207 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:50:03.599903 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 17 17:50:03.604966 (kubelet)[1962]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:50:03.872288 kubelet[1962]: E0317 17:50:03.872139 1962 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:50:03.879209 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:50:03.879414 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:50:06.810368 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4257616842.mount: Deactivated successfully. Mar 17 17:50:08.395791 containerd[1495]: time="2025-03-17T17:50:08.395697229Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:50:08.502661 containerd[1495]: time="2025-03-17T17:50:08.502555534Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.3: active requests=0, bytes read=30918185" Mar 17 17:50:08.556660 containerd[1495]: time="2025-03-17T17:50:08.556592408Z" level=info msg="ImageCreate event name:\"sha256:a1ae78fd2f9d8fc345928378dc947c7f1e95f01c1a552781827071867a95d09c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:50:08.579539 containerd[1495]: time="2025-03-17T17:50:08.579440458Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:5015269547a0b7dd2c062758e9a64467b58978ff2502cad4c3f5cdf4aa554ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:50:08.580896 containerd[1495]: time="2025-03-17T17:50:08.580831767Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.3\" with image id 
\"sha256:a1ae78fd2f9d8fc345928378dc947c7f1e95f01c1a552781827071867a95d09c\", repo tag \"registry.k8s.io/kube-proxy:v1.32.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:5015269547a0b7dd2c062758e9a64467b58978ff2502cad4c3f5cdf4aa554ad3\", size \"30917204\" in 5.641814608s" Mar 17 17:50:08.580962 containerd[1495]: time="2025-03-17T17:50:08.580898585Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.3\" returns image reference \"sha256:a1ae78fd2f9d8fc345928378dc947c7f1e95f01c1a552781827071867a95d09c\"" Mar 17 17:50:08.581754 containerd[1495]: time="2025-03-17T17:50:08.581698098Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Mar 17 17:50:10.451609 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2058549049.mount: Deactivated successfully. Mar 17 17:50:12.010986 containerd[1495]: time="2025-03-17T17:50:12.010877692Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:50:12.011943 containerd[1495]: time="2025-03-17T17:50:12.011884595Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Mar 17 17:50:12.018937 containerd[1495]: time="2025-03-17T17:50:12.018871846Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:50:12.023492 containerd[1495]: time="2025-03-17T17:50:12.023350821Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:50:12.024569 containerd[1495]: time="2025-03-17T17:50:12.024508138Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", 
repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 3.442775783s" Mar 17 17:50:12.024569 containerd[1495]: time="2025-03-17T17:50:12.024551023Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Mar 17 17:50:12.025689 containerd[1495]: time="2025-03-17T17:50:12.025635264Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 17 17:50:13.234206 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2537018990.mount: Deactivated successfully. Mar 17 17:50:13.243327 containerd[1495]: time="2025-03-17T17:50:13.243280260Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:50:13.244149 containerd[1495]: time="2025-03-17T17:50:13.244104369Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Mar 17 17:50:13.245266 containerd[1495]: time="2025-03-17T17:50:13.245226218Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:50:13.247638 containerd[1495]: time="2025-03-17T17:50:13.247597093Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:50:13.248483 containerd[1495]: time="2025-03-17T17:50:13.248407508Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest 
\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.222727815s" Mar 17 17:50:13.248531 containerd[1495]: time="2025-03-17T17:50:13.248488096Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Mar 17 17:50:13.248981 containerd[1495]: time="2025-03-17T17:50:13.248959915Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Mar 17 17:50:13.803425 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4267193072.mount: Deactivated successfully. Mar 17 17:50:14.129777 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 17 17:50:14.140274 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:50:14.710115 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:50:14.716602 (kubelet)[2071]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:50:15.032898 kubelet[2071]: E0317 17:50:15.032719 2071 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:50:15.038402 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:50:15.038594 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Mar 17 17:50:16.719755 containerd[1495]: time="2025-03-17T17:50:16.719636064Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:50:16.723046 containerd[1495]: time="2025-03-17T17:50:16.722596013Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551320" Mar 17 17:50:16.725422 containerd[1495]: time="2025-03-17T17:50:16.725349033Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:50:16.731228 containerd[1495]: time="2025-03-17T17:50:16.731055648Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:50:16.733681 containerd[1495]: time="2025-03-17T17:50:16.733616214Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.484616991s" Mar 17 17:50:16.733875 containerd[1495]: time="2025-03-17T17:50:16.733683025Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Mar 17 17:50:18.827351 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:50:18.837240 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:50:18.862327 systemd[1]: Reloading requested from client PID 2135 ('systemctl') (unit session-7.scope)... Mar 17 17:50:18.862341 systemd[1]: Reloading... 
Mar 17 17:50:18.956052 zram_generator::config[2177]: No configuration found. Mar 17 17:50:19.979078 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:50:20.063770 systemd[1]: Reloading finished in 1201 ms. Mar 17 17:50:20.120078 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 17 17:50:20.120172 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 17 17:50:20.120451 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:50:20.123718 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:50:20.284626 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:50:20.290309 (kubelet)[2223]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 17:50:20.331829 kubelet[2223]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:50:20.331829 kubelet[2223]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 17 17:50:20.331829 kubelet[2223]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 17 17:50:20.332282 kubelet[2223]: I0317 17:50:20.331907 2223 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 17:50:20.633831 kubelet[2223]: I0317 17:50:20.633763 2223 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Mar 17 17:50:20.633831 kubelet[2223]: I0317 17:50:20.633797 2223 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 17:50:20.634120 kubelet[2223]: I0317 17:50:20.634091 2223 server.go:954] "Client rotation is on, will bootstrap in background" Mar 17 17:50:20.700600 kubelet[2223]: E0317 17:50:20.700535 2223 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.104:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:50:20.709027 kubelet[2223]: I0317 17:50:20.708971 2223 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 17:50:20.748704 kubelet[2223]: E0317 17:50:20.748655 2223 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 17 17:50:20.748704 kubelet[2223]: I0317 17:50:20.748697 2223 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 17 17:50:20.753998 kubelet[2223]: I0317 17:50:20.753957 2223 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 17:50:20.754262 kubelet[2223]: I0317 17:50:20.754221 2223 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 17:50:20.754439 kubelet[2223]: I0317 17:50:20.754248 2223 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 17 17:50:20.754439 kubelet[2223]: I0317 17:50:20.754437 2223 topology_manager.go:138] "Creating topology manager with none policy" 
Mar 17 17:50:20.754632 kubelet[2223]: I0317 17:50:20.754446 2223 container_manager_linux.go:304] "Creating device plugin manager" Mar 17 17:50:20.754632 kubelet[2223]: I0317 17:50:20.754617 2223 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:50:20.771307 kubelet[2223]: I0317 17:50:20.771244 2223 kubelet.go:446] "Attempting to sync node with API server" Mar 17 17:50:20.771307 kubelet[2223]: I0317 17:50:20.771275 2223 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 17:50:20.771307 kubelet[2223]: I0317 17:50:20.771296 2223 kubelet.go:352] "Adding apiserver pod source" Mar 17 17:50:20.771307 kubelet[2223]: I0317 17:50:20.771309 2223 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 17:50:20.779592 kubelet[2223]: I0317 17:50:20.779550 2223 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 17 17:50:20.779996 kubelet[2223]: I0317 17:50:20.779967 2223 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 17:50:20.790981 kubelet[2223]: W0317 17:50:20.790940 2223 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Mar 17 17:50:20.793696 kubelet[2223]: W0317 17:50:20.793616 2223 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.104:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused Mar 17 17:50:20.793696 kubelet[2223]: E0317 17:50:20.793692 2223 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.104:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:50:20.793914 kubelet[2223]: W0317 17:50:20.793722 2223 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused Mar 17 17:50:20.793914 kubelet[2223]: E0317 17:50:20.793761 2223 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:50:20.804472 kubelet[2223]: I0317 17:50:20.804424 2223 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 17 17:50:20.804472 kubelet[2223]: I0317 17:50:20.804483 2223 server.go:1287] "Started kubelet" Mar 17 17:50:20.812954 kubelet[2223]: I0317 17:50:20.812339 2223 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 17:50:20.812954 kubelet[2223]: I0317 17:50:20.812787 2223 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 17:50:20.812954 kubelet[2223]: I0317 
17:50:20.812872 2223 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 17:50:20.813654 kubelet[2223]: I0317 17:50:20.813634 2223 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 17:50:20.814248 kubelet[2223]: I0317 17:50:20.814211 2223 server.go:490] "Adding debug handlers to kubelet server" Mar 17 17:50:20.814643 kubelet[2223]: I0317 17:50:20.814534 2223 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 17 17:50:20.815850 kubelet[2223]: E0317 17:50:20.815714 2223 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:50:20.815850 kubelet[2223]: I0317 17:50:20.815759 2223 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 17 17:50:20.815994 kubelet[2223]: I0317 17:50:20.815959 2223 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 17:50:20.816248 kubelet[2223]: I0317 17:50:20.816046 2223 reconciler.go:26] "Reconciler: start to sync state" Mar 17 17:50:20.816576 kubelet[2223]: W0317 17:50:20.816522 2223 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused Mar 17 17:50:20.816631 kubelet[2223]: E0317 17:50:20.816582 2223 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:50:20.817342 kubelet[2223]: E0317 17:50:20.817219 2223 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.104:6443: connect: connection refused" interval="200ms" Mar 17 17:50:20.817342 kubelet[2223]: E0317 17:50:20.817304 2223 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 17:50:20.817580 kubelet[2223]: I0317 17:50:20.817527 2223 factory.go:221] Registration of the systemd container factory successfully Mar 17 17:50:20.817662 kubelet[2223]: I0317 17:50:20.817620 2223 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 17:50:20.818741 kubelet[2223]: I0317 17:50:20.818688 2223 factory.go:221] Registration of the containerd container factory successfully Mar 17 17:50:20.830202 kubelet[2223]: E0317 17:50:20.828693 2223 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.104:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.104:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.182da86de4b888b4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-17 17:50:20.804450484 +0000 UTC m=+0.509752768,LastTimestamp:2025-03-17 17:50:20.804450484 +0000 UTC m=+0.509752768,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 17 17:50:20.838623 kubelet[2223]: I0317 17:50:20.838582 2223 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 17 17:50:20.838623 kubelet[2223]: I0317 17:50:20.838609 
2223 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 17 17:50:20.838623 kubelet[2223]: I0317 17:50:20.838634 2223 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:50:20.844599 kubelet[2223]: I0317 17:50:20.844528 2223 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 17:50:20.845888 kubelet[2223]: I0317 17:50:20.845861 2223 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 17 17:50:20.845946 kubelet[2223]: I0317 17:50:20.845903 2223 status_manager.go:227] "Starting to sync pod status with apiserver" Mar 17 17:50:20.845946 kubelet[2223]: I0317 17:50:20.845929 2223 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 17 17:50:20.845946 kubelet[2223]: I0317 17:50:20.845940 2223 kubelet.go:2388] "Starting kubelet main sync loop" Mar 17 17:50:20.846024 kubelet[2223]: E0317 17:50:20.845993 2223 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 17:50:20.881969 kubelet[2223]: W0317 17:50:20.881892 2223 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused Mar 17 17:50:20.881969 kubelet[2223]: E0317 17:50:20.881964 2223 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:50:20.916485 kubelet[2223]: E0317 17:50:20.916355 2223 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not 
found" Mar 17 17:50:20.946686 kubelet[2223]: E0317 17:50:20.946638 2223 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 17 17:50:21.016897 kubelet[2223]: E0317 17:50:21.016829 2223 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:50:21.018396 kubelet[2223]: E0317 17:50:21.018357 2223 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.104:6443: connect: connection refused" interval="400ms" Mar 17 17:50:21.117736 kubelet[2223]: E0317 17:50:21.117684 2223 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:50:21.147038 kubelet[2223]: E0317 17:50:21.146943 2223 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 17 17:50:21.218773 kubelet[2223]: E0317 17:50:21.218610 2223 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:50:21.319555 kubelet[2223]: E0317 17:50:21.319492 2223 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:50:21.419390 kubelet[2223]: E0317 17:50:21.419340 2223 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.104:6443: connect: connection refused" interval="800ms" Mar 17 17:50:21.420371 kubelet[2223]: E0317 17:50:21.420350 2223 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:50:21.521120 kubelet[2223]: E0317 17:50:21.520953 2223 kubelet_node_status.go:467] "Error getting the current node from 
lister" err="node \"localhost\" not found" Mar 17 17:50:21.547186 kubelet[2223]: E0317 17:50:21.547134 2223 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 17 17:50:21.621830 kubelet[2223]: E0317 17:50:21.621768 2223 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:50:21.722717 kubelet[2223]: E0317 17:50:21.722662 2223 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:50:21.823440 kubelet[2223]: E0317 17:50:21.823309 2223 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:50:21.850324 kubelet[2223]: W0317 17:50:21.850227 2223 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused Mar 17 17:50:21.850324 kubelet[2223]: E0317 17:50:21.850305 2223 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:50:21.923720 kubelet[2223]: E0317 17:50:21.923608 2223 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:50:22.024335 kubelet[2223]: E0317 17:50:22.024258 2223 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:50:22.125128 kubelet[2223]: E0317 17:50:22.125073 2223 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:50:22.145916 kubelet[2223]: W0317 
17:50:22.145859 2223 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused Mar 17 17:50:22.145916 kubelet[2223]: E0317 17:50:22.145916 2223 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:50:22.220624 kubelet[2223]: E0317 17:50:22.220558 2223 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.104:6443: connect: connection refused" interval="1.6s" Mar 17 17:50:22.225752 kubelet[2223]: E0317 17:50:22.225695 2223 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:50:22.233492 kubelet[2223]: W0317 17:50:22.233440 2223 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.104:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused Mar 17 17:50:22.233568 kubelet[2223]: E0317 17:50:22.233507 2223 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.104:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:50:22.326696 kubelet[2223]: E0317 17:50:22.326638 2223 kubelet_node_status.go:467] "Error getting the current node 
from lister" err="node \"localhost\" not found" Mar 17 17:50:22.347941 kubelet[2223]: E0317 17:50:22.347840 2223 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 17 17:50:22.380731 kubelet[2223]: W0317 17:50:22.380545 2223 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused Mar 17 17:50:22.380731 kubelet[2223]: E0317 17:50:22.380617 2223 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:50:22.427360 kubelet[2223]: E0317 17:50:22.427271 2223 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:50:22.528100 kubelet[2223]: E0317 17:50:22.528035 2223 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:50:22.628776 kubelet[2223]: E0317 17:50:22.628690 2223 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:50:22.729845 kubelet[2223]: E0317 17:50:22.729670 2223 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:50:22.749928 kubelet[2223]: I0317 17:50:22.749867 2223 policy_none.go:49] "None policy: Start" Mar 17 17:50:22.749928 kubelet[2223]: I0317 17:50:22.749918 2223 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 17 17:50:22.749928 kubelet[2223]: I0317 17:50:22.749944 2223 state_mem.go:35] "Initializing new in-memory state store" Mar 17 
17:50:22.767955 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 17 17:50:22.784001 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 17 17:50:22.802217 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 17 17:50:22.803640 kubelet[2223]: I0317 17:50:22.803598 2223 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 17:50:22.803915 kubelet[2223]: I0317 17:50:22.803861 2223 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 17 17:50:22.803915 kubelet[2223]: I0317 17:50:22.803882 2223 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 17:50:22.804543 kubelet[2223]: I0317 17:50:22.804178 2223 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 17:50:22.804999 kubelet[2223]: E0317 17:50:22.804930 2223 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 17 17:50:22.804999 kubelet[2223]: E0317 17:50:22.804982 2223 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 17 17:50:22.863984 kubelet[2223]: E0317 17:50:22.863918 2223 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.104:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:50:22.905870 kubelet[2223]: I0317 17:50:22.905812 2223 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Mar 17 17:50:22.906424 kubelet[2223]: E0317 17:50:22.906374 2223 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.104:6443/api/v1/nodes\": dial tcp 10.0.0.104:6443: connect: connection refused" node="localhost" Mar 17 17:50:23.108482 kubelet[2223]: I0317 17:50:23.108445 2223 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Mar 17 17:50:23.108965 kubelet[2223]: E0317 17:50:23.108913 2223 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.104:6443/api/v1/nodes\": dial tcp 10.0.0.104:6443: connect: connection refused" node="localhost" Mar 17 17:50:23.452960 kubelet[2223]: W0317 17:50:23.452786 2223 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused Mar 17 17:50:23.452960 kubelet[2223]: E0317 17:50:23.452856 2223 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://10.0.0.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:50:23.510723 kubelet[2223]: I0317 17:50:23.510679 2223 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Mar 17 17:50:23.511237 kubelet[2223]: E0317 17:50:23.511182 2223 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.104:6443/api/v1/nodes\": dial tcp 10.0.0.104:6443: connect: connection refused" node="localhost" Mar 17 17:50:23.821735 kubelet[2223]: E0317 17:50:23.821594 2223 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.104:6443: connect: connection refused" interval="3.2s" Mar 17 17:50:23.957433 systemd[1]: Created slice kubepods-burstable-pod8e9d41ca3360da7294237da4d8ef33c7.slice - libcontainer container kubepods-burstable-pod8e9d41ca3360da7294237da4d8ef33c7.slice. Mar 17 17:50:23.976804 kubelet[2223]: E0317 17:50:23.976750 2223 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 17 17:50:23.980404 systemd[1]: Created slice kubepods-burstable-podcbbb394ff48414687df77e1bc213eeb5.slice - libcontainer container kubepods-burstable-podcbbb394ff48414687df77e1bc213eeb5.slice. Mar 17 17:50:23.982206 kubelet[2223]: E0317 17:50:23.982166 2223 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 17 17:50:23.983710 systemd[1]: Created slice kubepods-burstable-pod3700e556aa2777679a324159272023f1.slice - libcontainer container kubepods-burstable-pod3700e556aa2777679a324159272023f1.slice. 
Mar 17 17:50:23.985250 kubelet[2223]: E0317 17:50:23.985221 2223 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 17 17:50:24.038041 kubelet[2223]: I0317 17:50:24.037939 2223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3700e556aa2777679a324159272023f1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"3700e556aa2777679a324159272023f1\") " pod="kube-system/kube-scheduler-localhost" Mar 17 17:50:24.038041 kubelet[2223]: I0317 17:50:24.037997 2223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8e9d41ca3360da7294237da4d8ef33c7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8e9d41ca3360da7294237da4d8ef33c7\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:50:24.038041 kubelet[2223]: I0317 17:50:24.038038 2223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8e9d41ca3360da7294237da4d8ef33c7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8e9d41ca3360da7294237da4d8ef33c7\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:50:24.038041 kubelet[2223]: I0317 17:50:24.038056 2223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:50:24.038497 kubelet[2223]: I0317 17:50:24.038076 2223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:50:24.038497 kubelet[2223]: I0317 17:50:24.038094 2223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:50:24.038497 kubelet[2223]: I0317 17:50:24.038128 2223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:50:24.038497 kubelet[2223]: I0317 17:50:24.038146 2223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8e9d41ca3360da7294237da4d8ef33c7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8e9d41ca3360da7294237da4d8ef33c7\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:50:24.038497 kubelet[2223]: I0317 17:50:24.038161 2223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:50:24.277932 kubelet[2223]: E0317 17:50:24.277859 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:50:24.278786 containerd[1495]: time="2025-03-17T17:50:24.278717095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8e9d41ca3360da7294237da4d8ef33c7,Namespace:kube-system,Attempt:0,}" Mar 17 17:50:24.283030 kubelet[2223]: E0317 17:50:24.282991 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:50:24.283518 containerd[1495]: time="2025-03-17T17:50:24.283476719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:cbbb394ff48414687df77e1bc213eeb5,Namespace:kube-system,Attempt:0,}" Mar 17 17:50:24.285796 kubelet[2223]: E0317 17:50:24.285760 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:50:24.286224 containerd[1495]: time="2025-03-17T17:50:24.286196508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:3700e556aa2777679a324159272023f1,Namespace:kube-system,Attempt:0,}" Mar 17 17:50:24.312516 kubelet[2223]: I0317 17:50:24.312467 2223 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Mar 17 17:50:24.312835 kubelet[2223]: E0317 17:50:24.312801 2223 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.104:6443/api/v1/nodes\": dial tcp 10.0.0.104:6443: connect: connection refused" node="localhost" Mar 17 17:50:24.516969 kubelet[2223]: W0317 17:50:24.516924 2223 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused Mar 17 17:50:24.516969 kubelet[2223]: 
E0317 17:50:24.516968 2223 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:50:24.805194 kubelet[2223]: W0317 17:50:24.805127 2223 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused Mar 17 17:50:24.805194 kubelet[2223]: E0317 17:50:24.805184 2223 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:50:25.229994 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2635036195.mount: Deactivated successfully. 
Mar 17 17:50:25.237889 containerd[1495]: time="2025-03-17T17:50:25.237827566Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:50:25.242660 containerd[1495]: time="2025-03-17T17:50:25.242598753Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Mar 17 17:50:25.243629 containerd[1495]: time="2025-03-17T17:50:25.243598596Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:50:25.244783 containerd[1495]: time="2025-03-17T17:50:25.244732398Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:50:25.246164 containerd[1495]: time="2025-03-17T17:50:25.246129719Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 17 17:50:25.249307 containerd[1495]: time="2025-03-17T17:50:25.249277810Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:50:25.252049 containerd[1495]: time="2025-03-17T17:50:25.251991251Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 17 17:50:25.255901 containerd[1495]: time="2025-03-17T17:50:25.255871228Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:50:25.257873 
containerd[1495]: time="2025-03-17T17:50:25.257199374Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 978.332883ms" Mar 17 17:50:25.263423 containerd[1495]: time="2025-03-17T17:50:25.263393809Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 977.122242ms" Mar 17 17:50:25.264215 containerd[1495]: time="2025-03-17T17:50:25.264189447Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 980.58979ms" Mar 17 17:50:25.279873 kubelet[2223]: W0317 17:50:25.279836 2223 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.104:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused Mar 17 17:50:25.279951 kubelet[2223]: E0317 17:50:25.279877 2223 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.104:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:50:25.412208 containerd[1495]: 
time="2025-03-17T17:50:25.410189296Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:50:25.412208 containerd[1495]: time="2025-03-17T17:50:25.412040902Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:50:25.412208 containerd[1495]: time="2025-03-17T17:50:25.412053108Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:50:25.412208 containerd[1495]: time="2025-03-17T17:50:25.412132576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:50:25.412804 containerd[1495]: time="2025-03-17T17:50:25.412208886Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:50:25.412804 containerd[1495]: time="2025-03-17T17:50:25.412257075Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:50:25.412804 containerd[1495]: time="2025-03-17T17:50:25.412276598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:50:25.412804 containerd[1495]: time="2025-03-17T17:50:25.412451438Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:50:25.415683 containerd[1495]: time="2025-03-17T17:50:25.415481086Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:50:25.415683 containerd[1495]: time="2025-03-17T17:50:25.415531208Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:50:25.415683 containerd[1495]: time="2025-03-17T17:50:25.415545069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:50:25.415866 containerd[1495]: time="2025-03-17T17:50:25.415629827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:50:25.438160 systemd[1]: Started cri-containerd-303cd58590c010f6dc00310076c41be3fdf07772a04a451122269c7773da5282.scope - libcontainer container 303cd58590c010f6dc00310076c41be3fdf07772a04a451122269c7773da5282. Mar 17 17:50:25.443670 systemd[1]: Started cri-containerd-67461f97ed3a3ec80fcc416054526f50b8b4e6b2e2933a78cc3c25da817282e9.scope - libcontainer container 67461f97ed3a3ec80fcc416054526f50b8b4e6b2e2933a78cc3c25da817282e9. Mar 17 17:50:25.446086 systemd[1]: Started cri-containerd-a3e0de2f603b6f32abfe31c382470ee67c8228b71cd8310f59ecf3255702a321.scope - libcontainer container a3e0de2f603b6f32abfe31c382470ee67c8228b71cd8310f59ecf3255702a321. 
Mar 17 17:50:25.483834 containerd[1495]: time="2025-03-17T17:50:25.482920876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8e9d41ca3360da7294237da4d8ef33c7,Namespace:kube-system,Attempt:0,} returns sandbox id \"303cd58590c010f6dc00310076c41be3fdf07772a04a451122269c7773da5282\"" Mar 17 17:50:25.486668 kubelet[2223]: E0317 17:50:25.486527 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:50:25.488709 containerd[1495]: time="2025-03-17T17:50:25.488414767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:cbbb394ff48414687df77e1bc213eeb5,Namespace:kube-system,Attempt:0,} returns sandbox id \"a3e0de2f603b6f32abfe31c382470ee67c8228b71cd8310f59ecf3255702a321\"" Mar 17 17:50:25.491268 kubelet[2223]: E0317 17:50:25.490254 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:50:25.492812 containerd[1495]: time="2025-03-17T17:50:25.492779105Z" level=info msg="CreateContainer within sandbox \"303cd58590c010f6dc00310076c41be3fdf07772a04a451122269c7773da5282\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 17 17:50:25.496975 containerd[1495]: time="2025-03-17T17:50:25.496934686Z" level=info msg="CreateContainer within sandbox \"a3e0de2f603b6f32abfe31c382470ee67c8228b71cd8310f59ecf3255702a321\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 17 17:50:25.502836 containerd[1495]: time="2025-03-17T17:50:25.502815011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:3700e556aa2777679a324159272023f1,Namespace:kube-system,Attempt:0,} returns sandbox id \"67461f97ed3a3ec80fcc416054526f50b8b4e6b2e2933a78cc3c25da817282e9\"" Mar 17 
17:50:25.503419 kubelet[2223]: E0317 17:50:25.503399 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:50:25.504720 containerd[1495]: time="2025-03-17T17:50:25.504679395Z" level=info msg="CreateContainer within sandbox \"67461f97ed3a3ec80fcc416054526f50b8b4e6b2e2933a78cc3c25da817282e9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 17 17:50:25.649428 containerd[1495]: time="2025-03-17T17:50:25.649356790Z" level=info msg="CreateContainer within sandbox \"a3e0de2f603b6f32abfe31c382470ee67c8228b71cd8310f59ecf3255702a321\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"92628e68856b6b4065fa2a6db94fabede4e257319e232d7536c2b9c15cc9bc1c\"" Mar 17 17:50:25.650243 containerd[1495]: time="2025-03-17T17:50:25.650213884Z" level=info msg="StartContainer for \"92628e68856b6b4065fa2a6db94fabede4e257319e232d7536c2b9c15cc9bc1c\"" Mar 17 17:50:25.654008 containerd[1495]: time="2025-03-17T17:50:25.653946430Z" level=info msg="CreateContainer within sandbox \"303cd58590c010f6dc00310076c41be3fdf07772a04a451122269c7773da5282\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"74bd4e9d55209b394bcae6ae22e947b9f5fd586ea53b8d476fff97355622dc31\"" Mar 17 17:50:25.654627 containerd[1495]: time="2025-03-17T17:50:25.654582993Z" level=info msg="StartContainer for \"74bd4e9d55209b394bcae6ae22e947b9f5fd586ea53b8d476fff97355622dc31\"" Mar 17 17:50:25.660087 containerd[1495]: time="2025-03-17T17:50:25.660042948Z" level=info msg="CreateContainer within sandbox \"67461f97ed3a3ec80fcc416054526f50b8b4e6b2e2933a78cc3c25da817282e9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"57d38ceb322f333fff6b4f40d1fc5b849b250fd62242cdf5cef73cf04aab65d0\"" Mar 17 17:50:25.660586 containerd[1495]: time="2025-03-17T17:50:25.660559702Z" level=info 
msg="StartContainer for \"57d38ceb322f333fff6b4f40d1fc5b849b250fd62242cdf5cef73cf04aab65d0\"" Mar 17 17:50:25.679613 systemd[1]: Started cri-containerd-92628e68856b6b4065fa2a6db94fabede4e257319e232d7536c2b9c15cc9bc1c.scope - libcontainer container 92628e68856b6b4065fa2a6db94fabede4e257319e232d7536c2b9c15cc9bc1c. Mar 17 17:50:25.684155 systemd[1]: Started cri-containerd-74bd4e9d55209b394bcae6ae22e947b9f5fd586ea53b8d476fff97355622dc31.scope - libcontainer container 74bd4e9d55209b394bcae6ae22e947b9f5fd586ea53b8d476fff97355622dc31. Mar 17 17:50:25.688450 systemd[1]: Started cri-containerd-57d38ceb322f333fff6b4f40d1fc5b849b250fd62242cdf5cef73cf04aab65d0.scope - libcontainer container 57d38ceb322f333fff6b4f40d1fc5b849b250fd62242cdf5cef73cf04aab65d0. Mar 17 17:50:25.739447 containerd[1495]: time="2025-03-17T17:50:25.739312351Z" level=info msg="StartContainer for \"92628e68856b6b4065fa2a6db94fabede4e257319e232d7536c2b9c15cc9bc1c\" returns successfully" Mar 17 17:50:25.739447 containerd[1495]: time="2025-03-17T17:50:25.739429042Z" level=info msg="StartContainer for \"74bd4e9d55209b394bcae6ae22e947b9f5fd586ea53b8d476fff97355622dc31\" returns successfully" Mar 17 17:50:25.749825 containerd[1495]: time="2025-03-17T17:50:25.749659442Z" level=info msg="StartContainer for \"57d38ceb322f333fff6b4f40d1fc5b849b250fd62242cdf5cef73cf04aab65d0\" returns successfully" Mar 17 17:50:25.862061 kubelet[2223]: E0317 17:50:25.861447 2223 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 17 17:50:25.862061 kubelet[2223]: E0317 17:50:25.861626 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:50:25.865423 kubelet[2223]: E0317 17:50:25.865390 2223 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"localhost\" not found" node="localhost" Mar 17 17:50:25.865520 kubelet[2223]: E0317 17:50:25.865494 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:50:25.868593 kubelet[2223]: E0317 17:50:25.868551 2223 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 17 17:50:25.868757 kubelet[2223]: E0317 17:50:25.868727 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:50:25.914731 kubelet[2223]: I0317 17:50:25.914683 2223 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Mar 17 17:50:26.676911 kubelet[2223]: I0317 17:50:26.676827 2223 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Mar 17 17:50:26.676911 kubelet[2223]: E0317 17:50:26.676903 2223 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Mar 17 17:50:26.682667 kubelet[2223]: E0317 17:50:26.682621 2223 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:50:26.782996 kubelet[2223]: E0317 17:50:26.782943 2223 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:50:26.874706 kubelet[2223]: E0317 17:50:26.874503 2223 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 17 17:50:26.874706 kubelet[2223]: E0317 17:50:26.874668 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Mar 17 17:50:26.875715 kubelet[2223]: E0317 17:50:26.875565 2223 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 17 17:50:26.875715 kubelet[2223]: E0317 17:50:26.875654 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:50:26.890494 kubelet[2223]: E0317 17:50:26.888488 2223 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:50:26.991304 kubelet[2223]: E0317 17:50:26.991152 2223 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:50:26.992617 kubelet[2223]: E0317 17:50:26.992113 2223 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 17 17:50:26.992617 kubelet[2223]: E0317 17:50:26.992458 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:50:27.091666 kubelet[2223]: E0317 17:50:27.091587 2223 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:50:27.192698 kubelet[2223]: E0317 17:50:27.192615 2223 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:50:27.293665 kubelet[2223]: E0317 17:50:27.293422 2223 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:50:27.394806 kubelet[2223]: E0317 17:50:27.394398 2223 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:50:27.494854 kubelet[2223]: 
E0317 17:50:27.494781 2223 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:50:27.597292 kubelet[2223]: E0317 17:50:27.597183 2223 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:50:27.698939 kubelet[2223]: E0317 17:50:27.698114 2223 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:50:27.802666 kubelet[2223]: E0317 17:50:27.798852 2223 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:50:27.900489 kubelet[2223]: E0317 17:50:27.899631 2223 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:50:28.002330 kubelet[2223]: E0317 17:50:28.000179 2223 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:50:28.105778 kubelet[2223]: E0317 17:50:28.105707 2223 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:50:28.206077 kubelet[2223]: E0317 17:50:28.205839 2223 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:50:28.306976 kubelet[2223]: E0317 17:50:28.306855 2223 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:50:28.408434 kubelet[2223]: E0317 17:50:28.408126 2223 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:50:28.510318 kubelet[2223]: E0317 17:50:28.510093 2223 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:50:28.611724 kubelet[2223]: E0317 17:50:28.610911 2223 kubelet_node_status.go:467] "Error getting the current node from 
lister" err="node \"localhost\" not found" Mar 17 17:50:28.712126 kubelet[2223]: E0317 17:50:28.712058 2223 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:50:28.812961 kubelet[2223]: E0317 17:50:28.812807 2223 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:50:28.917002 kubelet[2223]: I0317 17:50:28.916879 2223 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 17 17:50:29.099030 kubelet[2223]: I0317 17:50:29.098973 2223 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 17 17:50:29.205469 kubelet[2223]: I0317 17:50:29.205311 2223 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 17 17:50:29.791212 kubelet[2223]: I0317 17:50:29.790552 2223 apiserver.go:52] "Watching apiserver" Mar 17 17:50:29.794992 kubelet[2223]: E0317 17:50:29.794941 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:50:29.795982 kubelet[2223]: E0317 17:50:29.795953 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:50:29.796289 kubelet[2223]: E0317 17:50:29.796260 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:50:29.817633 kubelet[2223]: I0317 17:50:29.817453 2223 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 17:50:30.382773 systemd[1]: Reloading requested from client PID 2500 ('systemctl') (unit session-7.scope)... 
Mar 17 17:50:30.382796 systemd[1]: Reloading... Mar 17 17:50:30.614124 zram_generator::config[2542]: No configuration found. Mar 17 17:50:30.854772 update_engine[1485]: I20250317 17:50:30.853225 1485 update_attempter.cc:509] Updating boot flags... Mar 17 17:50:30.877847 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:50:31.044725 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2579) Mar 17 17:50:31.046689 systemd[1]: Reloading finished in 663 ms. Mar 17 17:50:31.103078 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2581) Mar 17 17:50:31.170133 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:50:31.203153 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2581) Mar 17 17:50:31.208712 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 17:50:31.209138 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:50:31.209196 systemd[1]: kubelet.service: Consumed 1.079s CPU time, 128.8M memory peak, 0B memory swap peak. Mar 17 17:50:31.265535 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:50:31.571918 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:50:31.581988 (kubelet)[2599]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 17:50:31.683800 kubelet[2599]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 17 17:50:31.683800 kubelet[2599]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 17 17:50:31.683800 kubelet[2599]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:50:31.684498 kubelet[2599]: I0317 17:50:31.683875 2599 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 17:50:31.699338 kubelet[2599]: I0317 17:50:31.699271 2599 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Mar 17 17:50:31.699338 kubelet[2599]: I0317 17:50:31.699311 2599 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 17:50:31.700220 kubelet[2599]: I0317 17:50:31.699637 2599 server.go:954] "Client rotation is on, will bootstrap in background" Mar 17 17:50:31.701410 kubelet[2599]: I0317 17:50:31.701374 2599 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 17 17:50:31.726158 kubelet[2599]: I0317 17:50:31.726088 2599 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 17:50:31.733062 kubelet[2599]: E0317 17:50:31.732315 2599 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 17 17:50:31.733062 kubelet[2599]: I0317 17:50:31.732362 2599 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Mar 17 17:50:31.739550 kubelet[2599]: I0317 17:50:31.739481 2599 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 17 17:50:31.739868 kubelet[2599]: I0317 17:50:31.739805 2599 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 17:50:31.740133 kubelet[2599]: I0317 17:50:31.739855 2599 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVer
sion":2} Mar 17 17:50:31.740133 kubelet[2599]: I0317 17:50:31.740131 2599 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 17:50:31.740300 kubelet[2599]: I0317 17:50:31.740145 2599 container_manager_linux.go:304] "Creating device plugin manager" Mar 17 17:50:31.740300 kubelet[2599]: I0317 17:50:31.740212 2599 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:50:31.740458 kubelet[2599]: I0317 17:50:31.740424 2599 kubelet.go:446] "Attempting to sync node with API server" Mar 17 17:50:31.740458 kubelet[2599]: I0317 17:50:31.740448 2599 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 17:50:31.740527 kubelet[2599]: I0317 17:50:31.740471 2599 kubelet.go:352] "Adding apiserver pod source" Mar 17 17:50:31.740527 kubelet[2599]: I0317 17:50:31.740485 2599 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 17:50:31.743483 kubelet[2599]: I0317 17:50:31.742772 2599 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 17 17:50:31.744341 kubelet[2599]: I0317 17:50:31.744300 2599 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 17:50:31.756718 kubelet[2599]: I0317 17:50:31.756418 2599 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 17 17:50:31.756718 kubelet[2599]: I0317 17:50:31.756477 2599 server.go:1287] "Started kubelet" Mar 17 17:50:31.759249 kubelet[2599]: I0317 17:50:31.758847 2599 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 17:50:31.769844 kubelet[2599]: I0317 17:50:31.768590 2599 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 17:50:31.777236 kubelet[2599]: I0317 17:50:31.777148 2599 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 17 17:50:31.781921 kubelet[2599]: I0317 
17:50:31.772236 2599 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 17:50:31.782768 kubelet[2599]: I0317 17:50:31.782726 2599 server.go:490] "Adding debug handlers to kubelet server" Mar 17 17:50:31.783633 kubelet[2599]: I0317 17:50:31.783588 2599 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 17:50:31.787493 kubelet[2599]: I0317 17:50:31.786591 2599 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 17 17:50:31.787766 kubelet[2599]: I0317 17:50:31.787709 2599 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 17:50:31.787766 kubelet[2599]: I0317 17:50:31.787768 2599 reconciler.go:26] "Reconciler: start to sync state" Mar 17 17:50:31.808937 kubelet[2599]: I0317 17:50:31.805680 2599 factory.go:221] Registration of the systemd container factory successfully Mar 17 17:50:31.808937 kubelet[2599]: I0317 17:50:31.805844 2599 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 17:50:31.808937 kubelet[2599]: I0317 17:50:31.807041 2599 factory.go:221] Registration of the containerd container factory successfully Mar 17 17:50:31.816062 kubelet[2599]: E0317 17:50:31.814565 2599 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 17:50:31.838587 kubelet[2599]: I0317 17:50:31.837750 2599 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 17:50:31.845009 kubelet[2599]: I0317 17:50:31.844975 2599 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 17 17:50:31.845406 kubelet[2599]: I0317 17:50:31.845389 2599 status_manager.go:227] "Starting to sync pod status with apiserver" Mar 17 17:50:31.848407 kubelet[2599]: I0317 17:50:31.848330 2599 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 17 17:50:31.848407 kubelet[2599]: I0317 17:50:31.848357 2599 kubelet.go:2388] "Starting kubelet main sync loop" Mar 17 17:50:31.848546 kubelet[2599]: E0317 17:50:31.848434 2599 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 17:50:31.917248 kubelet[2599]: I0317 17:50:31.917117 2599 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 17 17:50:31.917248 kubelet[2599]: I0317 17:50:31.917157 2599 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 17 17:50:31.917248 kubelet[2599]: I0317 17:50:31.917209 2599 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:50:31.917496 kubelet[2599]: I0317 17:50:31.917471 2599 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 17 17:50:31.917529 kubelet[2599]: I0317 17:50:31.917488 2599 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 17 17:50:31.917529 kubelet[2599]: I0317 17:50:31.917516 2599 policy_none.go:49] "None policy: Start" Mar 17 17:50:31.917529 kubelet[2599]: I0317 17:50:31.917529 2599 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 17 17:50:31.917628 kubelet[2599]: I0317 17:50:31.917545 2599 state_mem.go:35] "Initializing new in-memory state store" Mar 17 17:50:31.917734 kubelet[2599]: I0317 17:50:31.917697 2599 state_mem.go:75] "Updated machine memory state" Mar 17 17:50:31.937104 kubelet[2599]: I0317 17:50:31.937064 2599 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 17:50:31.938340 kubelet[2599]: I0317 
17:50:31.937534 2599 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 17 17:50:31.938340 kubelet[2599]: I0317 17:50:31.937556 2599 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 17:50:31.938340 kubelet[2599]: I0317 17:50:31.938219 2599 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 17:50:31.945599 kubelet[2599]: E0317 17:50:31.945315 2599 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 17 17:50:31.953977 kubelet[2599]: I0317 17:50:31.951701 2599 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 17 17:50:31.953977 kubelet[2599]: I0317 17:50:31.951725 2599 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 17 17:50:31.953977 kubelet[2599]: I0317 17:50:31.951925 2599 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 17 17:50:31.989405 kubelet[2599]: I0317 17:50:31.989308 2599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:50:31.989405 kubelet[2599]: I0317 17:50:31.989369 2599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3700e556aa2777679a324159272023f1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"3700e556aa2777679a324159272023f1\") " pod="kube-system/kube-scheduler-localhost" Mar 17 17:50:31.989405 kubelet[2599]: I0317 17:50:31.989391 2599 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8e9d41ca3360da7294237da4d8ef33c7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8e9d41ca3360da7294237da4d8ef33c7\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:50:31.989405 kubelet[2599]: I0317 17:50:31.989415 2599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8e9d41ca3360da7294237da4d8ef33c7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8e9d41ca3360da7294237da4d8ef33c7\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:50:31.989723 kubelet[2599]: I0317 17:50:31.989444 2599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8e9d41ca3360da7294237da4d8ef33c7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8e9d41ca3360da7294237da4d8ef33c7\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:50:31.989723 kubelet[2599]: I0317 17:50:31.989465 2599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:50:31.989723 kubelet[2599]: I0317 17:50:31.989485 2599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:50:31.989723 kubelet[2599]: I0317 17:50:31.989504 2599 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:50:31.989723 kubelet[2599]: I0317 17:50:31.989528 2599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:50:32.081168 kubelet[2599]: I0317 17:50:32.080816 2599 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Mar 17 17:50:32.091505 kubelet[2599]: E0317 17:50:32.091282 2599 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 17 17:50:32.139875 kubelet[2599]: E0317 17:50:32.121738 2599 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Mar 17 17:50:32.139875 kubelet[2599]: E0317 17:50:32.122332 2599 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Mar 17 17:50:32.139875 kubelet[2599]: E0317 17:50:32.122573 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:50:32.139875 kubelet[2599]: E0317 17:50:32.124549 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Mar 17 17:50:32.180465 kubelet[2599]: I0317 17:50:32.178452 2599 kubelet_node_status.go:125] "Node was previously registered" node="localhost" Mar 17 17:50:32.180465 kubelet[2599]: I0317 17:50:32.178637 2599 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Mar 17 17:50:32.395581 kubelet[2599]: E0317 17:50:32.394705 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:50:32.744388 kubelet[2599]: I0317 17:50:32.744200 2599 apiserver.go:52] "Watching apiserver" Mar 17 17:50:32.792135 kubelet[2599]: I0317 17:50:32.789448 2599 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 17:50:32.868788 kubelet[2599]: E0317 17:50:32.867160 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:50:32.868788 kubelet[2599]: E0317 17:50:32.868057 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:50:32.868788 kubelet[2599]: I0317 17:50:32.868250 2599 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 17 17:50:32.938680 kubelet[2599]: E0317 17:50:32.938601 2599 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Mar 17 17:50:32.939003 kubelet[2599]: E0317 17:50:32.938959 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:50:33.142247 kubelet[2599]: I0317 17:50:33.141891 2599 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=4.131483072 podStartE2EDuration="4.131483072s" podCreationTimestamp="2025-03-17 17:50:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:50:32.961995426 +0000 UTC m=+1.370441405" watchObservedRunningTime="2025-03-17 17:50:33.131483072 +0000 UTC m=+1.539929040" Mar 17 17:50:33.873555 kubelet[2599]: E0317 17:50:33.872246 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:50:33.873555 kubelet[2599]: E0317 17:50:33.873035 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:50:34.521382 kubelet[2599]: I0317 17:50:34.521319 2599 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 17 17:50:34.521922 containerd[1495]: time="2025-03-17T17:50:34.521873532Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Mar 17 17:50:34.522410 kubelet[2599]: I0317 17:50:34.522131 2599 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 17 17:50:34.725153 kubelet[2599]: E0317 17:50:34.724649 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:50:34.873435 kubelet[2599]: E0317 17:50:34.873398 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:50:35.443434 systemd[1]: Created slice kubepods-besteffort-poda5d1ab7d_d678_4b39_94d9_352afc3092d7.slice - libcontainer container kubepods-besteffort-poda5d1ab7d_d678_4b39_94d9_352afc3092d7.slice. Mar 17 17:50:35.496647 kubelet[2599]: I0317 17:50:35.496564 2599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a5d1ab7d-d678-4b39-94d9-352afc3092d7-lib-modules\") pod \"kube-proxy-w2l22\" (UID: \"a5d1ab7d-d678-4b39-94d9-352afc3092d7\") " pod="kube-system/kube-proxy-w2l22" Mar 17 17:50:35.496647 kubelet[2599]: I0317 17:50:35.496632 2599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qg4p\" (UniqueName: \"kubernetes.io/projected/a5d1ab7d-d678-4b39-94d9-352afc3092d7-kube-api-access-6qg4p\") pod \"kube-proxy-w2l22\" (UID: \"a5d1ab7d-d678-4b39-94d9-352afc3092d7\") " pod="kube-system/kube-proxy-w2l22" Mar 17 17:50:35.497401 kubelet[2599]: I0317 17:50:35.496674 2599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a5d1ab7d-d678-4b39-94d9-352afc3092d7-kube-proxy\") pod \"kube-proxy-w2l22\" (UID: \"a5d1ab7d-d678-4b39-94d9-352afc3092d7\") " pod="kube-system/kube-proxy-w2l22" Mar 17 
17:50:35.497401 kubelet[2599]: I0317 17:50:35.496703 2599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a5d1ab7d-d678-4b39-94d9-352afc3092d7-xtables-lock\") pod \"kube-proxy-w2l22\" (UID: \"a5d1ab7d-d678-4b39-94d9-352afc3092d7\") " pod="kube-system/kube-proxy-w2l22" Mar 17 17:50:35.763298 kubelet[2599]: E0317 17:50:35.762376 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:50:35.769301 containerd[1495]: time="2025-03-17T17:50:35.769232747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-w2l22,Uid:a5d1ab7d-d678-4b39-94d9-352afc3092d7,Namespace:kube-system,Attempt:0,}" Mar 17 17:50:36.368180 containerd[1495]: time="2025-03-17T17:50:36.367540561Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:50:36.368180 containerd[1495]: time="2025-03-17T17:50:36.367622471Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:50:36.368180 containerd[1495]: time="2025-03-17T17:50:36.367637553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:50:36.368180 containerd[1495]: time="2025-03-17T17:50:36.367743794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:50:36.458542 systemd[1]: run-containerd-runc-k8s.io-c147bbeb8d4682541d698012337c7b2f25b5b127e20fbe849f98f5cfdf8622c0-runc.gBEyGt.mount: Deactivated successfully. 
Mar 17 17:50:36.499653 systemd[1]: Started cri-containerd-c147bbeb8d4682541d698012337c7b2f25b5b127e20fbe849f98f5cfdf8622c0.scope - libcontainer container c147bbeb8d4682541d698012337c7b2f25b5b127e20fbe849f98f5cfdf8622c0. Mar 17 17:50:36.576780 containerd[1495]: time="2025-03-17T17:50:36.576720881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-w2l22,Uid:a5d1ab7d-d678-4b39-94d9-352afc3092d7,Namespace:kube-system,Attempt:0,} returns sandbox id \"c147bbeb8d4682541d698012337c7b2f25b5b127e20fbe849f98f5cfdf8622c0\"" Mar 17 17:50:36.577958 kubelet[2599]: E0317 17:50:36.577385 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:50:36.602466 containerd[1495]: time="2025-03-17T17:50:36.602401211Z" level=info msg="CreateContainer within sandbox \"c147bbeb8d4682541d698012337c7b2f25b5b127e20fbe849f98f5cfdf8622c0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 17 17:50:37.582859 containerd[1495]: time="2025-03-17T17:50:37.582753181Z" level=info msg="CreateContainer within sandbox \"c147bbeb8d4682541d698012337c7b2f25b5b127e20fbe849f98f5cfdf8622c0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d272bb15415a6647a4ab0d57cab20497116b8408ffc72eeff725fa742ccb165c\"" Mar 17 17:50:37.583958 containerd[1495]: time="2025-03-17T17:50:37.583876350Z" level=info msg="StartContainer for \"d272bb15415a6647a4ab0d57cab20497116b8408ffc72eeff725fa742ccb165c\"" Mar 17 17:50:37.641321 systemd[1]: Started cri-containerd-d272bb15415a6647a4ab0d57cab20497116b8408ffc72eeff725fa742ccb165c.scope - libcontainer container d272bb15415a6647a4ab0d57cab20497116b8408ffc72eeff725fa742ccb165c. 
Mar 17 17:50:37.780287 containerd[1495]: time="2025-03-17T17:50:37.779141128Z" level=info msg="StartContainer for \"d272bb15415a6647a4ab0d57cab20497116b8408ffc72eeff725fa742ccb165c\" returns successfully" Mar 17 17:50:37.924931 kubelet[2599]: E0317 17:50:37.924881 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:50:37.993417 kubelet[2599]: I0317 17:50:37.993337 2599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-w2l22" podStartSLOduration=2.993311691 podStartE2EDuration="2.993311691s" podCreationTimestamp="2025-03-17 17:50:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:50:37.974821466 +0000 UTC m=+6.383267434" watchObservedRunningTime="2025-03-17 17:50:37.993311691 +0000 UTC m=+6.401757660" Mar 17 17:50:38.040002 systemd[1]: Created slice kubepods-besteffort-pod10065107_3d94_4f53_a960_662ebabbbb48.slice - libcontainer container kubepods-besteffort-pod10065107_3d94_4f53_a960_662ebabbbb48.slice. 
Mar 17 17:50:38.093582 kubelet[2599]: I0317 17:50:38.090667 2599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/10065107-3d94-4f53-a960-662ebabbbb48-var-lib-calico\") pod \"tigera-operator-ccfc44587-49697\" (UID: \"10065107-3d94-4f53-a960-662ebabbbb48\") " pod="tigera-operator/tigera-operator-ccfc44587-49697" Mar 17 17:50:38.093582 kubelet[2599]: I0317 17:50:38.090728 2599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znkhf\" (UniqueName: \"kubernetes.io/projected/10065107-3d94-4f53-a960-662ebabbbb48-kube-api-access-znkhf\") pod \"tigera-operator-ccfc44587-49697\" (UID: \"10065107-3d94-4f53-a960-662ebabbbb48\") " pod="tigera-operator/tigera-operator-ccfc44587-49697" Mar 17 17:50:38.661055 containerd[1495]: time="2025-03-17T17:50:38.659244803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-ccfc44587-49697,Uid:10065107-3d94-4f53-a960-662ebabbbb48,Namespace:tigera-operator,Attempt:0,}" Mar 17 17:50:38.928547 kubelet[2599]: E0317 17:50:38.927642 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:50:38.979320 containerd[1495]: time="2025-03-17T17:50:38.979169906Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:50:38.979320 containerd[1495]: time="2025-03-17T17:50:38.979263229Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:50:38.979589 containerd[1495]: time="2025-03-17T17:50:38.979285244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:50:38.979589 containerd[1495]: time="2025-03-17T17:50:38.979438400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:50:39.024360 systemd[1]: Started cri-containerd-7dfaf98b026a9537e5be6128b90e772f5dc0345fe23de0babd877ad8ee906a48.scope - libcontainer container 7dfaf98b026a9537e5be6128b90e772f5dc0345fe23de0babd877ad8ee906a48. Mar 17 17:50:39.105751 containerd[1495]: time="2025-03-17T17:50:39.101523041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-ccfc44587-49697,Uid:10065107-3d94-4f53-a960-662ebabbbb48,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"7dfaf98b026a9537e5be6128b90e772f5dc0345fe23de0babd877ad8ee906a48\"" Mar 17 17:50:39.109578 containerd[1495]: time="2025-03-17T17:50:39.109190224Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.5\"" Mar 17 17:50:39.809041 sudo[1674]: pam_unix(sudo:session): session closed for user root Mar 17 17:50:39.826311 sshd[1673]: Connection closed by 10.0.0.1 port 51386 Mar 17 17:50:39.826636 sshd-session[1670]: pam_unix(sshd:session): session closed for user core Mar 17 17:50:39.836524 systemd[1]: sshd@6-10.0.0.104:22-10.0.0.1:51386.service: Deactivated successfully. Mar 17 17:50:39.840944 systemd[1]: session-7.scope: Deactivated successfully. Mar 17 17:50:39.841309 systemd[1]: session-7.scope: Consumed 5.991s CPU time, 152.9M memory peak, 0B memory swap peak. Mar 17 17:50:39.846049 systemd-logind[1479]: Session 7 logged out. Waiting for processes to exit. Mar 17 17:50:39.852420 systemd-logind[1479]: Removed session 7. 
Mar 17 17:50:41.095087 kubelet[2599]: E0317 17:50:41.092316 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:50:41.266450 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2399430014.mount: Deactivated successfully. Mar 17 17:50:41.970129 kubelet[2599]: E0317 17:50:41.970084 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:50:42.983330 kubelet[2599]: E0317 17:50:42.981390 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:50:43.942805 containerd[1495]: time="2025-03-17T17:50:43.942708122Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:50:43.947784 containerd[1495]: time="2025-03-17T17:50:43.947675133Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.5: active requests=0, bytes read=21945008" Mar 17 17:50:43.949455 containerd[1495]: time="2025-03-17T17:50:43.949400275Z" level=info msg="ImageCreate event name:\"sha256:dc4a8a56c133edb1bc4c3d6bc94bcd96f2bde82413370cb1783ac2d7f3a46d53\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:50:43.953188 containerd[1495]: time="2025-03-17T17:50:43.953107602Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:3341fa9475c0325b86228c8726389f9bae9fd6c430c66fe5cd5dc39d7bb6ad4b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:50:43.954628 containerd[1495]: time="2025-03-17T17:50:43.954493735Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.5\" with image id 
\"sha256:dc4a8a56c133edb1bc4c3d6bc94bcd96f2bde82413370cb1783ac2d7f3a46d53\", repo tag \"quay.io/tigera/operator:v1.36.5\", repo digest \"quay.io/tigera/operator@sha256:3341fa9475c0325b86228c8726389f9bae9fd6c430c66fe5cd5dc39d7bb6ad4b\", size \"21941003\" in 4.845244471s" Mar 17 17:50:43.954628 containerd[1495]: time="2025-03-17T17:50:43.954557966Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.5\" returns image reference \"sha256:dc4a8a56c133edb1bc4c3d6bc94bcd96f2bde82413370cb1783ac2d7f3a46d53\"" Mar 17 17:50:43.959498 containerd[1495]: time="2025-03-17T17:50:43.959429404Z" level=info msg="CreateContainer within sandbox \"7dfaf98b026a9537e5be6128b90e772f5dc0345fe23de0babd877ad8ee906a48\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 17 17:50:44.004341 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3163094451.mount: Deactivated successfully. Mar 17 17:50:44.010810 kubelet[2599]: E0317 17:50:44.010401 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:50:44.023399 containerd[1495]: time="2025-03-17T17:50:44.022714342Z" level=info msg="CreateContainer within sandbox \"7dfaf98b026a9537e5be6128b90e772f5dc0345fe23de0babd877ad8ee906a48\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"07ddba7e5307c084e8ac67d0d8e94c0686cb4ccabfc0e0be246d3029a936554a\"" Mar 17 17:50:44.025031 containerd[1495]: time="2025-03-17T17:50:44.024946751Z" level=info msg="StartContainer for \"07ddba7e5307c084e8ac67d0d8e94c0686cb4ccabfc0e0be246d3029a936554a\"" Mar 17 17:50:44.082499 systemd[1]: Started cri-containerd-07ddba7e5307c084e8ac67d0d8e94c0686cb4ccabfc0e0be246d3029a936554a.scope - libcontainer container 07ddba7e5307c084e8ac67d0d8e94c0686cb4ccabfc0e0be246d3029a936554a. 
Mar 17 17:50:44.187075 containerd[1495]: time="2025-03-17T17:50:44.186946267Z" level=info msg="StartContainer for \"07ddba7e5307c084e8ac67d0d8e94c0686cb4ccabfc0e0be246d3029a936554a\" returns successfully" Mar 17 17:50:44.751655 kubelet[2599]: E0317 17:50:44.749965 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:50:48.380183 kubelet[2599]: I0317 17:50:48.380081 2599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-ccfc44587-49697" podStartSLOduration=6.531596389 podStartE2EDuration="11.380049413s" podCreationTimestamp="2025-03-17 17:50:37 +0000 UTC" firstStartedPulling="2025-03-17 17:50:39.10812886 +0000 UTC m=+7.516574828" lastFinishedPulling="2025-03-17 17:50:43.956581884 +0000 UTC m=+12.365027852" observedRunningTime="2025-03-17 17:50:45.255053513 +0000 UTC m=+13.663499481" watchObservedRunningTime="2025-03-17 17:50:48.380049413 +0000 UTC m=+16.788495401" Mar 17 17:50:48.406242 systemd[1]: Created slice kubepods-besteffort-pod8c18c433_89ed_4b75_b9cc_85feca9304d5.slice - libcontainer container kubepods-besteffort-pod8c18c433_89ed_4b75_b9cc_85feca9304d5.slice. 
Mar 17 17:50:48.408557 kubelet[2599]: I0317 17:50:48.408501 2599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c18c433-89ed-4b75-b9cc-85feca9304d5-tigera-ca-bundle\") pod \"calico-typha-5f659d6cf6-zhllt\" (UID: \"8c18c433-89ed-4b75-b9cc-85feca9304d5\") " pod="calico-system/calico-typha-5f659d6cf6-zhllt" Mar 17 17:50:48.408664 kubelet[2599]: I0317 17:50:48.408561 2599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8c18c433-89ed-4b75-b9cc-85feca9304d5-typha-certs\") pod \"calico-typha-5f659d6cf6-zhllt\" (UID: \"8c18c433-89ed-4b75-b9cc-85feca9304d5\") " pod="calico-system/calico-typha-5f659d6cf6-zhllt" Mar 17 17:50:48.408664 kubelet[2599]: I0317 17:50:48.408590 2599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg96q\" (UniqueName: \"kubernetes.io/projected/8c18c433-89ed-4b75-b9cc-85feca9304d5-kube-api-access-sg96q\") pod \"calico-typha-5f659d6cf6-zhllt\" (UID: \"8c18c433-89ed-4b75-b9cc-85feca9304d5\") " pod="calico-system/calico-typha-5f659d6cf6-zhllt" Mar 17 17:50:48.704940 systemd[1]: Created slice kubepods-besteffort-pod9da1e230_72ae_4cac_b675_56182d1f2cb4.slice - libcontainer container kubepods-besteffort-pod9da1e230_72ae_4cac_b675_56182d1f2cb4.slice. 
Mar 17 17:50:48.718230 kubelet[2599]: I0317 17:50:48.717049 2599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9da1e230-72ae-4cac-b675-56182d1f2cb4-node-certs\") pod \"calico-node-5xw47\" (UID: \"9da1e230-72ae-4cac-b675-56182d1f2cb4\") " pod="calico-system/calico-node-5xw47" Mar 17 17:50:48.718230 kubelet[2599]: I0317 17:50:48.717156 2599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9da1e230-72ae-4cac-b675-56182d1f2cb4-policysync\") pod \"calico-node-5xw47\" (UID: \"9da1e230-72ae-4cac-b675-56182d1f2cb4\") " pod="calico-system/calico-node-5xw47" Mar 17 17:50:48.718230 kubelet[2599]: I0317 17:50:48.717188 2599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9da1e230-72ae-4cac-b675-56182d1f2cb4-cni-bin-dir\") pod \"calico-node-5xw47\" (UID: \"9da1e230-72ae-4cac-b675-56182d1f2cb4\") " pod="calico-system/calico-node-5xw47" Mar 17 17:50:48.718230 kubelet[2599]: I0317 17:50:48.717208 2599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9da1e230-72ae-4cac-b675-56182d1f2cb4-cni-log-dir\") pod \"calico-node-5xw47\" (UID: \"9da1e230-72ae-4cac-b675-56182d1f2cb4\") " pod="calico-system/calico-node-5xw47" Mar 17 17:50:48.718230 kubelet[2599]: I0317 17:50:48.717228 2599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9da1e230-72ae-4cac-b675-56182d1f2cb4-var-lib-calico\") pod \"calico-node-5xw47\" (UID: \"9da1e230-72ae-4cac-b675-56182d1f2cb4\") " pod="calico-system/calico-node-5xw47" Mar 17 17:50:48.718552 kubelet[2599]: I0317 17:50:48.717261 2599 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9da1e230-72ae-4cac-b675-56182d1f2cb4-lib-modules\") pod \"calico-node-5xw47\" (UID: \"9da1e230-72ae-4cac-b675-56182d1f2cb4\") " pod="calico-system/calico-node-5xw47" Mar 17 17:50:48.718552 kubelet[2599]: I0317 17:50:48.717283 2599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9da1e230-72ae-4cac-b675-56182d1f2cb4-var-run-calico\") pod \"calico-node-5xw47\" (UID: \"9da1e230-72ae-4cac-b675-56182d1f2cb4\") " pod="calico-system/calico-node-5xw47" Mar 17 17:50:48.718552 kubelet[2599]: I0317 17:50:48.717304 2599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9da1e230-72ae-4cac-b675-56182d1f2cb4-cni-net-dir\") pod \"calico-node-5xw47\" (UID: \"9da1e230-72ae-4cac-b675-56182d1f2cb4\") " pod="calico-system/calico-node-5xw47" Mar 17 17:50:48.718552 kubelet[2599]: I0317 17:50:48.717327 2599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9da1e230-72ae-4cac-b675-56182d1f2cb4-tigera-ca-bundle\") pod \"calico-node-5xw47\" (UID: \"9da1e230-72ae-4cac-b675-56182d1f2cb4\") " pod="calico-system/calico-node-5xw47" Mar 17 17:50:48.718552 kubelet[2599]: I0317 17:50:48.717356 2599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2jtr\" (UniqueName: \"kubernetes.io/projected/9da1e230-72ae-4cac-b675-56182d1f2cb4-kube-api-access-j2jtr\") pod \"calico-node-5xw47\" (UID: \"9da1e230-72ae-4cac-b675-56182d1f2cb4\") " pod="calico-system/calico-node-5xw47" Mar 17 17:50:48.718733 kubelet[2599]: I0317 17:50:48.717383 2599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9da1e230-72ae-4cac-b675-56182d1f2cb4-flexvol-driver-host\") pod \"calico-node-5xw47\" (UID: \"9da1e230-72ae-4cac-b675-56182d1f2cb4\") " pod="calico-system/calico-node-5xw47" Mar 17 17:50:48.718733 kubelet[2599]: I0317 17:50:48.717403 2599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9da1e230-72ae-4cac-b675-56182d1f2cb4-xtables-lock\") pod \"calico-node-5xw47\" (UID: \"9da1e230-72ae-4cac-b675-56182d1f2cb4\") " pod="calico-system/calico-node-5xw47" Mar 17 17:50:48.720996 kubelet[2599]: E0317 17:50:48.720103 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:50:48.725955 containerd[1495]: time="2025-03-17T17:50:48.725868294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5f659d6cf6-zhllt,Uid:8c18c433-89ed-4b75-b9cc-85feca9304d5,Namespace:calico-system,Attempt:0,}" Mar 17 17:50:48.806362 containerd[1495]: time="2025-03-17T17:50:48.805946617Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:50:48.806362 containerd[1495]: time="2025-03-17T17:50:48.806078592Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:50:48.806362 containerd[1495]: time="2025-03-17T17:50:48.806093882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:50:48.806362 containerd[1495]: time="2025-03-17T17:50:48.806210346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:50:48.856767 kubelet[2599]: E0317 17:50:48.845889 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:48.856767 kubelet[2599]: W0317 17:50:48.845957 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:48.856767 kubelet[2599]: E0317 17:50:48.846115 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:50:48.856767 kubelet[2599]: E0317 17:50:48.854352 2599 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9zh68" podUID="8eeb7871-e618-4798-a87d-f7b3c9c67c97" Mar 17 17:50:48.847185 systemd[1]: Started cri-containerd-f915e9643adb0874cbf972e2ce464f53284640ed9d721d7543aee625177112a6.scope - libcontainer container f915e9643adb0874cbf972e2ce464f53284640ed9d721d7543aee625177112a6. Mar 17 17:50:48.883649 kubelet[2599]: E0317 17:50:48.883592 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:48.883649 kubelet[2599]: W0317 17:50:48.883644 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:48.883886 kubelet[2599]: E0317 17:50:48.883680 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:50:48.917378 kubelet[2599]: E0317 17:50:48.917333 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:48.917672 kubelet[2599]: W0317 17:50:48.917643 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:48.917727 kubelet[2599]: E0317 17:50:48.917682 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:50:48.918221 kubelet[2599]: E0317 17:50:48.918198 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:48.918301 kubelet[2599]: W0317 17:50:48.918227 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:48.918301 kubelet[2599]: E0317 17:50:48.918240 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:50:48.918536 kubelet[2599]: E0317 17:50:48.918514 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:48.918536 kubelet[2599]: W0317 17:50:48.918532 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:48.918650 kubelet[2599]: E0317 17:50:48.918544 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:50:48.918997 kubelet[2599]: E0317 17:50:48.918973 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:48.919072 kubelet[2599]: W0317 17:50:48.919021 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:48.919072 kubelet[2599]: E0317 17:50:48.919036 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:50:48.919415 kubelet[2599]: E0317 17:50:48.919386 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:48.919415 kubelet[2599]: W0317 17:50:48.919403 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:48.919511 kubelet[2599]: E0317 17:50:48.919439 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:50:48.919730 kubelet[2599]: E0317 17:50:48.919712 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:48.919730 kubelet[2599]: W0317 17:50:48.919727 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:48.919841 kubelet[2599]: E0317 17:50:48.919740 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:50:48.919970 kubelet[2599]: E0317 17:50:48.919956 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:48.919970 kubelet[2599]: W0317 17:50:48.919968 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:48.920066 kubelet[2599]: E0317 17:50:48.919979 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:50:48.920303 kubelet[2599]: E0317 17:50:48.920285 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:48.920303 kubelet[2599]: W0317 17:50:48.920300 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:48.920393 kubelet[2599]: E0317 17:50:48.920311 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:50:48.923909 kubelet[2599]: E0317 17:50:48.921526 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:48.923909 kubelet[2599]: W0317 17:50:48.921539 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:48.923909 kubelet[2599]: E0317 17:50:48.921555 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:50:48.925863 kubelet[2599]: E0317 17:50:48.925225 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:48.925863 kubelet[2599]: W0317 17:50:48.925250 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:48.925863 kubelet[2599]: E0317 17:50:48.925271 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:50:48.925863 kubelet[2599]: E0317 17:50:48.925772 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:48.925863 kubelet[2599]: W0317 17:50:48.925783 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:48.925863 kubelet[2599]: E0317 17:50:48.925842 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:50:48.926554 kubelet[2599]: E0317 17:50:48.926493 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:48.926633 kubelet[2599]: W0317 17:50:48.926567 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:48.926633 kubelet[2599]: E0317 17:50:48.926581 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:50:48.931244 kubelet[2599]: E0317 17:50:48.930989 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:48.931244 kubelet[2599]: W0317 17:50:48.931024 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:48.931244 kubelet[2599]: E0317 17:50:48.931044 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:50:48.931244 kubelet[2599]: E0317 17:50:48.931314 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:48.931244 kubelet[2599]: W0317 17:50:48.931327 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:48.931244 kubelet[2599]: E0317 17:50:48.931340 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:50:48.932981 kubelet[2599]: E0317 17:50:48.931580 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:48.932981 kubelet[2599]: W0317 17:50:48.931591 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:48.932981 kubelet[2599]: E0317 17:50:48.931602 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:50:48.932981 kubelet[2599]: E0317 17:50:48.931842 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:48.932981 kubelet[2599]: W0317 17:50:48.931852 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:48.932981 kubelet[2599]: E0317 17:50:48.931863 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:50:48.932981 kubelet[2599]: E0317 17:50:48.932219 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:48.932981 kubelet[2599]: W0317 17:50:48.932230 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:48.932981 kubelet[2599]: E0317 17:50:48.932241 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:50:48.932981 kubelet[2599]: E0317 17:50:48.932513 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:48.935389 kubelet[2599]: W0317 17:50:48.932525 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:48.935389 kubelet[2599]: E0317 17:50:48.932536 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:50:48.935389 kubelet[2599]: E0317 17:50:48.932817 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:48.935389 kubelet[2599]: W0317 17:50:48.932840 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:48.935389 kubelet[2599]: E0317 17:50:48.932852 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:50:48.935389 kubelet[2599]: E0317 17:50:48.933961 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:48.935389 kubelet[2599]: W0317 17:50:48.933975 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:48.935389 kubelet[2599]: E0317 17:50:48.933991 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:50:48.935389 kubelet[2599]: E0317 17:50:48.935301 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:48.935389 kubelet[2599]: W0317 17:50:48.935313 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:48.935692 kubelet[2599]: E0317 17:50:48.935519 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:50:48.936194 kubelet[2599]: I0317 17:50:48.935782 2599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8eeb7871-e618-4798-a87d-f7b3c9c67c97-socket-dir\") pod \"csi-node-driver-9zh68\" (UID: \"8eeb7871-e618-4798-a87d-f7b3c9c67c97\") " pod="calico-system/csi-node-driver-9zh68" Mar 17 17:50:48.937898 kubelet[2599]: E0317 17:50:48.937875 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:48.938152 kubelet[2599]: W0317 17:50:48.937989 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:48.938152 kubelet[2599]: E0317 17:50:48.938031 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:50:48.940164 kubelet[2599]: E0317 17:50:48.940143 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:48.940368 kubelet[2599]: W0317 17:50:48.940244 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:48.940368 kubelet[2599]: E0317 17:50:48.940283 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:50:48.940707 kubelet[2599]: E0317 17:50:48.940695 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:48.940776 kubelet[2599]: W0317 17:50:48.940765 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:48.940847 kubelet[2599]: E0317 17:50:48.940826 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:50:48.940967 kubelet[2599]: I0317 17:50:48.940927 2599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8eeb7871-e618-4798-a87d-f7b3c9c67c97-varrun\") pod \"csi-node-driver-9zh68\" (UID: \"8eeb7871-e618-4798-a87d-f7b3c9c67c97\") " pod="calico-system/csi-node-driver-9zh68" Mar 17 17:50:48.942092 kubelet[2599]: E0317 17:50:48.941909 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:48.942092 kubelet[2599]: W0317 17:50:48.941925 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:48.942092 kubelet[2599]: E0317 17:50:48.941940 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:50:48.942346 kubelet[2599]: E0317 17:50:48.942331 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:48.942420 kubelet[2599]: W0317 17:50:48.942406 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:48.942586 kubelet[2599]: E0317 17:50:48.942554 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:50:48.943407 kubelet[2599]: E0317 17:50:48.943358 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:48.943407 kubelet[2599]: W0317 17:50:48.943373 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:48.943407 kubelet[2599]: E0317 17:50:48.943387 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:50:48.943739 kubelet[2599]: I0317 17:50:48.943584 2599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8eeb7871-e618-4798-a87d-f7b3c9c67c97-registration-dir\") pod \"csi-node-driver-9zh68\" (UID: \"8eeb7871-e618-4798-a87d-f7b3c9c67c97\") " pod="calico-system/csi-node-driver-9zh68" Mar 17 17:50:48.947484 kubelet[2599]: E0317 17:50:48.947436 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:48.948321 kubelet[2599]: W0317 17:50:48.948174 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:48.948321 kubelet[2599]: E0317 17:50:48.948226 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:50:48.953223 kubelet[2599]: E0317 17:50:48.953086 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:48.953223 kubelet[2599]: W0317 17:50:48.953121 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:48.953594 kubelet[2599]: E0317 17:50:48.953443 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:50:48.954100 kubelet[2599]: E0317 17:50:48.954083 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:48.954314 kubelet[2599]: W0317 17:50:48.954175 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:48.954314 kubelet[2599]: E0317 17:50:48.954194 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:50:48.954314 kubelet[2599]: I0317 17:50:48.954235 2599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k55rw\" (UniqueName: \"kubernetes.io/projected/8eeb7871-e618-4798-a87d-f7b3c9c67c97-kube-api-access-k55rw\") pod \"csi-node-driver-9zh68\" (UID: \"8eeb7871-e618-4798-a87d-f7b3c9c67c97\") " pod="calico-system/csi-node-driver-9zh68" Mar 17 17:50:48.955912 kubelet[2599]: E0317 17:50:48.955574 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:48.955912 kubelet[2599]: W0317 17:50:48.955599 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:48.955912 kubelet[2599]: E0317 17:50:48.955624 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:50:48.955912 kubelet[2599]: I0317 17:50:48.955649 2599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8eeb7871-e618-4798-a87d-f7b3c9c67c97-kubelet-dir\") pod \"csi-node-driver-9zh68\" (UID: \"8eeb7871-e618-4798-a87d-f7b3c9c67c97\") " pod="calico-system/csi-node-driver-9zh68" Mar 17 17:50:48.956240 kubelet[2599]: E0317 17:50:48.956194 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:48.956240 kubelet[2599]: W0317 17:50:48.956212 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:48.956525 kubelet[2599]: E0317 17:50:48.956493 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:50:48.956643 kubelet[2599]: E0317 17:50:48.956613 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:48.956643 kubelet[2599]: W0317 17:50:48.956631 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:48.956727 kubelet[2599]: E0317 17:50:48.956648 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:50:48.956952 kubelet[2599]: E0317 17:50:48.956931 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:48.956952 kubelet[2599]: W0317 17:50:48.956946 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:48.957073 kubelet[2599]: E0317 17:50:48.956958 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:50:48.957502 kubelet[2599]: E0317 17:50:48.957479 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:48.957502 kubelet[2599]: W0317 17:50:48.957494 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:48.957581 kubelet[2599]: E0317 17:50:48.957505 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Mar 17 17:50:48.987863 containerd[1495]: time="2025-03-17T17:50:48.987795711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5f659d6cf6-zhllt,Uid:8c18c433-89ed-4b75-b9cc-85feca9304d5,Namespace:calico-system,Attempt:0,} returns sandbox id \"f915e9643adb0874cbf972e2ce464f53284640ed9d721d7543aee625177112a6\""
Mar 17 17:50:48.989345 kubelet[2599]: E0317 17:50:48.989298 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:50:48.990786 containerd[1495]: time="2025-03-17T17:50:48.990749771Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.2\""
Mar 17 17:50:49.013739 kubelet[2599]: E0317 17:50:49.013331 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:50:49.029751 containerd[1495]: time="2025-03-17T17:50:49.018288994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5xw47,Uid:9da1e230-72ae-4cac-b675-56182d1f2cb4,Namespace:calico-system,Attempt:0,}"
Mar 17 17:50:49.057531 kubelet[2599]: E0317 17:50:49.057478 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:50:49.057531 kubelet[2599]: W0317 17:50:49.057520 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:50:49.057740 kubelet[2599]: E0317 17:50:49.057553 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:50:49.058750 kubelet[2599]: E0317 17:50:49.058694 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:50:49.058750 kubelet[2599]: W0317 17:50:49.058710 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:50:49.058750 kubelet[2599]: E0317 17:50:49.058725 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:50:49.063231 kubelet[2599]: E0317 17:50:49.060771 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:50:49.063231 kubelet[2599]: W0317 17:50:49.060790 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:50:49.063231 kubelet[2599]: E0317 17:50:49.060975 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:50:49.064647 kubelet[2599]: E0317 17:50:49.064582 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:50:49.064647 kubelet[2599]: W0317 17:50:49.064619 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:50:49.064969 kubelet[2599]: E0317 17:50:49.064766 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:50:49.066777 kubelet[2599]: E0317 17:50:49.066736 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:50:49.066777 kubelet[2599]: W0317 17:50:49.066768 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:50:49.066916 kubelet[2599]: E0317 17:50:49.066898 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:50:49.067456 kubelet[2599]: E0317 17:50:49.067204 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:50:49.067456 kubelet[2599]: W0317 17:50:49.067220 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:50:49.067456 kubelet[2599]: E0317 17:50:49.067344 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:50:49.069097 kubelet[2599]: E0317 17:50:49.067608 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:50:49.069097 kubelet[2599]: W0317 17:50:49.067635 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:50:49.069097 kubelet[2599]: E0317 17:50:49.067748 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:50:49.072256 kubelet[2599]: E0317 17:50:49.072212 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:50:49.072256 kubelet[2599]: W0317 17:50:49.072249 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:50:49.073235 kubelet[2599]: E0317 17:50:49.073007 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:50:49.074622 kubelet[2599]: E0317 17:50:49.074581 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:50:49.074622 kubelet[2599]: W0317 17:50:49.074610 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:50:49.074739 kubelet[2599]: E0317 17:50:49.074718 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:50:49.076325 kubelet[2599]: E0317 17:50:49.076287 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:50:49.076325 kubelet[2599]: W0317 17:50:49.076313 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:50:49.076481 kubelet[2599]: E0317 17:50:49.076453 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:50:49.076804 kubelet[2599]: E0317 17:50:49.076770 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:50:49.076804 kubelet[2599]: W0317 17:50:49.076791 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:50:49.077222 kubelet[2599]: E0317 17:50:49.077186 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:50:49.080141 kubelet[2599]: E0317 17:50:49.080069 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:50:49.080141 kubelet[2599]: W0317 17:50:49.080101 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:50:49.080316 kubelet[2599]: E0317 17:50:49.080284 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:50:49.084151 kubelet[2599]: E0317 17:50:49.080773 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:50:49.084151 kubelet[2599]: W0317 17:50:49.080795 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:50:49.084151 kubelet[2599]: E0317 17:50:49.081098 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:50:49.084151 kubelet[2599]: E0317 17:50:49.081230 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:50:49.084151 kubelet[2599]: W0317 17:50:49.081241 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:50:49.084151 kubelet[2599]: E0317 17:50:49.081332 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:50:49.090410 kubelet[2599]: E0317 17:50:49.090339 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:50:49.090410 kubelet[2599]: W0317 17:50:49.090383 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:50:49.090642 kubelet[2599]: E0317 17:50:49.090503 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:50:49.091066 kubelet[2599]: E0317 17:50:49.090999 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:50:49.091066 kubelet[2599]: W0317 17:50:49.091041 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:50:49.091198 kubelet[2599]: E0317 17:50:49.091166 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:50:49.091509 kubelet[2599]: E0317 17:50:49.091475 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:50:49.091509 kubelet[2599]: W0317 17:50:49.091493 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:50:49.091700 kubelet[2599]: E0317 17:50:49.091662 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:50:49.094933 kubelet[2599]: E0317 17:50:49.092870 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:50:49.094933 kubelet[2599]: W0317 17:50:49.092897 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:50:49.094933 kubelet[2599]: E0317 17:50:49.093270 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:50:49.095862 kubelet[2599]: E0317 17:50:49.095817 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:50:49.095862 kubelet[2599]: W0317 17:50:49.095852 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:50:49.096246 kubelet[2599]: E0317 17:50:49.096200 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:50:49.096342 kubelet[2599]: E0317 17:50:49.096319 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:50:49.096342 kubelet[2599]: W0317 17:50:49.096337 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:50:49.096446 kubelet[2599]: E0317 17:50:49.096423 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:50:49.096889 kubelet[2599]: E0317 17:50:49.096859 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:50:49.096889 kubelet[2599]: W0317 17:50:49.096878 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:50:49.097041 kubelet[2599]: E0317 17:50:49.096977 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:50:49.097856 kubelet[2599]: E0317 17:50:49.097629 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:50:49.097856 kubelet[2599]: W0317 17:50:49.097643 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:50:49.097856 kubelet[2599]: E0317 17:50:49.097728 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:50:49.098714 kubelet[2599]: E0317 17:50:49.098454 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:50:49.098714 kubelet[2599]: W0317 17:50:49.098466 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:50:49.098714 kubelet[2599]: E0317 17:50:49.098596 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:50:49.110292 kubelet[2599]: E0317 17:50:49.107722 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:50:49.110292 kubelet[2599]: W0317 17:50:49.107806 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:50:49.110292 kubelet[2599]: E0317 17:50:49.107877 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:50:49.123556 kubelet[2599]: E0317 17:50:49.112188 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:50:49.123556 kubelet[2599]: W0317 17:50:49.115934 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:50:49.123556 kubelet[2599]: E0317 17:50:49.120243 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:50:49.131441 containerd[1495]: time="2025-03-17T17:50:49.130888888Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:50:49.131441 containerd[1495]: time="2025-03-17T17:50:49.130982535Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:50:49.131441 containerd[1495]: time="2025-03-17T17:50:49.130998847Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:50:49.131441 containerd[1495]: time="2025-03-17T17:50:49.131177064Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:50:49.172830 kubelet[2599]: E0317 17:50:49.172363 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:50:49.172830 kubelet[2599]: W0317 17:50:49.172400 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:50:49.172830 kubelet[2599]: E0317 17:50:49.172436 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:50:49.212375 systemd[1]: Started cri-containerd-3283539b9c5fd33722a92838485c618f850526ff6b36f4ba80640e273e47bc0f.scope - libcontainer container 3283539b9c5fd33722a92838485c618f850526ff6b36f4ba80640e273e47bc0f.
Mar 17 17:50:49.334931 containerd[1495]: time="2025-03-17T17:50:49.333905225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5xw47,Uid:9da1e230-72ae-4cac-b675-56182d1f2cb4,Namespace:calico-system,Attempt:0,} returns sandbox id \"3283539b9c5fd33722a92838485c618f850526ff6b36f4ba80640e273e47bc0f\""
Mar 17 17:50:49.335433 kubelet[2599]: E0317 17:50:49.335374 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:50:50.849806 kubelet[2599]: E0317 17:50:50.849703 2599 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9zh68" podUID="8eeb7871-e618-4798-a87d-f7b3c9c67c97"
Mar 17 17:50:52.848747 kubelet[2599]: E0317 17:50:52.848657 2599 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9zh68" podUID="8eeb7871-e618-4798-a87d-f7b3c9c67c97"
Mar 17 17:50:54.850971 kubelet[2599]: E0317 17:50:54.849303 2599 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9zh68" podUID="8eeb7871-e618-4798-a87d-f7b3c9c67c97"
Mar 17 17:50:55.125901 containerd[1495]: time="2025-03-17T17:50:55.125672366Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:50:55.126933 containerd[1495]: time="2025-03-17T17:50:55.126790139Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.2: active requests=0, bytes read=30414075"
Mar 17 17:50:55.134464 containerd[1495]: time="2025-03-17T17:50:55.133050143Z" level=info msg="ImageCreate event name:\"sha256:1d6f9d005866d74e6f0a8b0b8b743d0eaf4efcb7c7032fd2215da9c6ca131cb5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:50:55.151958 containerd[1495]: time="2025-03-17T17:50:55.149951616Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:9839fd34b4c1bad50beed72aec59c64893487a46eea57dc2d7d66c3041d7bcce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:50:55.151958 containerd[1495]: time="2025-03-17T17:50:55.151040513Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.2\" with image id \"sha256:1d6f9d005866d74e6f0a8b0b8b743d0eaf4efcb7c7032fd2215da9c6ca131cb5\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:9839fd34b4c1bad50beed72aec59c64893487a46eea57dc2d7d66c3041d7bcce\", size \"31907171\" in 6.160132745s"
Mar 17 17:50:55.151958 containerd[1495]: time="2025-03-17T17:50:55.151071884Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.2\" returns image reference \"sha256:1d6f9d005866d74e6f0a8b0b8b743d0eaf4efcb7c7032fd2215da9c6ca131cb5\""
Mar 17 17:50:55.164289 containerd[1495]: time="2025-03-17T17:50:55.159611962Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\""
Mar 17 17:50:55.188724 containerd[1495]: time="2025-03-17T17:50:55.188659624Z" level=info msg="CreateContainer within sandbox \"f915e9643adb0874cbf972e2ce464f53284640ed9d721d7543aee625177112a6\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Mar 17 17:50:55.261588 containerd[1495]: time="2025-03-17T17:50:55.257531904Z" level=info msg="CreateContainer within sandbox \"f915e9643adb0874cbf972e2ce464f53284640ed9d721d7543aee625177112a6\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a8fc35a185fc4447b23f93a7d4208f7623626de59fed2d96ba74b31062067a21\""
Mar 17 17:50:55.261588 containerd[1495]: time="2025-03-17T17:50:55.259891065Z" level=info msg="StartContainer for \"a8fc35a185fc4447b23f93a7d4208f7623626de59fed2d96ba74b31062067a21\""
Mar 17 17:50:55.365357 systemd[1]: Started cri-containerd-a8fc35a185fc4447b23f93a7d4208f7623626de59fed2d96ba74b31062067a21.scope - libcontainer container a8fc35a185fc4447b23f93a7d4208f7623626de59fed2d96ba74b31062067a21.
Mar 17 17:50:55.506426 containerd[1495]: time="2025-03-17T17:50:55.506243446Z" level=info msg="StartContainer for \"a8fc35a185fc4447b23f93a7d4208f7623626de59fed2d96ba74b31062067a21\" returns successfully"
Mar 17 17:50:56.097561 kubelet[2599]: E0317 17:50:56.093749 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:50:56.160582 kubelet[2599]: I0317 17:50:56.160498 2599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5f659d6cf6-zhllt" podStartSLOduration=1.99185229 podStartE2EDuration="8.16047484s" podCreationTimestamp="2025-03-17 17:50:48 +0000 UTC" firstStartedPulling="2025-03-17 17:50:48.990215631 +0000 UTC m=+17.398661599" lastFinishedPulling="2025-03-17 17:50:55.158838181 +0000 UTC m=+23.567284149" observedRunningTime="2025-03-17 17:50:56.160154616 +0000 UTC m=+24.568600584" watchObservedRunningTime="2025-03-17 17:50:56.16047484 +0000 UTC m=+24.568920818"
Mar 17 17:50:56.174565 kubelet[2599]: E0317 17:50:56.174490 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:50:56.174565 kubelet[2599]: W0317 17:50:56.174560 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:50:56.174790 kubelet[2599]: E0317 17:50:56.174602 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:50:56.178488 kubelet[2599]: E0317 17:50:56.174961 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:50:56.178488 kubelet[2599]: W0317 17:50:56.177283 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:50:56.178836 kubelet[2599]: E0317 17:50:56.178670 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:50:56.182869 kubelet[2599]: E0317 17:50:56.182820 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:50:56.182869 kubelet[2599]: W0317 17:50:56.182854 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:50:56.183078 kubelet[2599]: E0317 17:50:56.182883 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:50:56.187496 kubelet[2599]: E0317 17:50:56.185147 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:50:56.187496 kubelet[2599]: W0317 17:50:56.185174 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:50:56.187496 kubelet[2599]: E0317 17:50:56.185196 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:50:56.187496 kubelet[2599]: E0317 17:50:56.185481 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:50:56.187496 kubelet[2599]: W0317 17:50:56.185490 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:50:56.187496 kubelet[2599]: E0317 17:50:56.185507 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:50:56.191657 kubelet[2599]: E0317 17:50:56.191602 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:50:56.191657 kubelet[2599]: W0317 17:50:56.191641 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:50:56.191861 kubelet[2599]: E0317 17:50:56.191681 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:50:56.192263 kubelet[2599]: E0317 17:50:56.192123 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:50:56.192263 kubelet[2599]: W0317 17:50:56.192140 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:50:56.192263 kubelet[2599]: E0317 17:50:56.192160 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:50:56.201543 kubelet[2599]: E0317 17:50:56.199097 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:50:56.201543 kubelet[2599]: W0317 17:50:56.199127 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:50:56.201543 kubelet[2599]: E0317 17:50:56.199157 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:50:56.201543 kubelet[2599]: E0317 17:50:56.199750 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:50:56.201543 kubelet[2599]: W0317 17:50:56.199764 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:50:56.201543 kubelet[2599]: E0317 17:50:56.199774 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:50:56.201543 kubelet[2599]: E0317 17:50:56.199986 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:50:56.201543 kubelet[2599]: W0317 17:50:56.199998 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:50:56.201543 kubelet[2599]: E0317 17:50:56.200028 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:50:56.201543 kubelet[2599]: E0317 17:50:56.200256 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:50:56.202143 kubelet[2599]: W0317 17:50:56.200269 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:50:56.202143 kubelet[2599]: E0317 17:50:56.200278 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:50:56.202143 kubelet[2599]: E0317 17:50:56.200490 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:50:56.202143 kubelet[2599]: W0317 17:50:56.200499 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:50:56.202143 kubelet[2599]: E0317 17:50:56.200511 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:50:56.202143 kubelet[2599]: E0317 17:50:56.200699 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:50:56.202143 kubelet[2599]: W0317 17:50:56.200708 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:50:56.202143 kubelet[2599]: E0317 17:50:56.200717 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:50:56.202143 kubelet[2599]: E0317 17:50:56.201671 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:50:56.202143 kubelet[2599]: W0317 17:50:56.201682 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:50:56.202447 kubelet[2599]: E0317 17:50:56.201694 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:50:56.203647 kubelet[2599]: E0317 17:50:56.203569 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:50:56.203647 kubelet[2599]: W0317 17:50:56.203594 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:50:56.203647 kubelet[2599]: E0317 17:50:56.203612 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:50:56.272634 kubelet[2599]: E0317 17:50:56.271907 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:50:56.272634 kubelet[2599]: W0317 17:50:56.272553 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:50:56.274950 kubelet[2599]: E0317 17:50:56.272733 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:50:56.274950 kubelet[2599]: E0317 17:50:56.273873 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:50:56.274950 kubelet[2599]: W0317 17:50:56.273885 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:50:56.274950 kubelet[2599]: E0317 17:50:56.273899 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:50:56.280537 kubelet[2599]: E0317 17:50:56.277593 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:50:56.280537 kubelet[2599]: W0317 17:50:56.277617 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:50:56.280537 kubelet[2599]: E0317 17:50:56.277647 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:50:56.280537 kubelet[2599]: E0317 17:50:56.278938 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:50:56.280537 kubelet[2599]: W0317 17:50:56.278967 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:50:56.280537 kubelet[2599]: E0317 17:50:56.279306 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:50:56.280537 kubelet[2599]: W0317 17:50:56.279318 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:50:56.280537 kubelet[2599]: E0317 17:50:56.279654 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:50:56.280537 kubelet[2599]: W0317 17:50:56.279667 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: 
[init], error: executable file not found in $PATH, output: "" Mar 17 17:50:56.280537 kubelet[2599]: E0317 17:50:56.279682 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:50:56.280537 kubelet[2599]: E0317 17:50:56.280043 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:56.280999 kubelet[2599]: W0317 17:50:56.280056 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:56.280999 kubelet[2599]: E0317 17:50:56.280070 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:50:56.280999 kubelet[2599]: E0317 17:50:56.280421 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:56.280999 kubelet[2599]: W0317 17:50:56.280434 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:56.290941 kubelet[2599]: E0317 17:50:56.281316 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:50:56.290941 kubelet[2599]: E0317 17:50:56.281516 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:50:56.290941 kubelet[2599]: E0317 17:50:56.281702 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:56.290941 kubelet[2599]: W0317 17:50:56.281715 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:56.290941 kubelet[2599]: E0317 17:50:56.281728 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:50:56.290941 kubelet[2599]: E0317 17:50:56.281997 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:56.290941 kubelet[2599]: W0317 17:50:56.282009 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:56.290941 kubelet[2599]: E0317 17:50:56.282038 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:50:56.290941 kubelet[2599]: E0317 17:50:56.282348 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:56.290941 kubelet[2599]: W0317 17:50:56.282360 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:56.291436 kubelet[2599]: E0317 17:50:56.282372 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:50:56.291436 kubelet[2599]: E0317 17:50:56.282928 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:56.291436 kubelet[2599]: W0317 17:50:56.282939 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:56.291436 kubelet[2599]: E0317 17:50:56.282990 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:50:56.291436 kubelet[2599]: E0317 17:50:56.283209 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:56.291436 kubelet[2599]: W0317 17:50:56.283220 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:56.291436 kubelet[2599]: E0317 17:50:56.283232 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:50:56.291436 kubelet[2599]: E0317 17:50:56.283501 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:56.291436 kubelet[2599]: W0317 17:50:56.283513 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:56.291436 kubelet[2599]: E0317 17:50:56.283525 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:50:56.291775 kubelet[2599]: E0317 17:50:56.283790 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:56.291775 kubelet[2599]: W0317 17:50:56.283802 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:56.291775 kubelet[2599]: E0317 17:50:56.283814 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:50:56.291775 kubelet[2599]: E0317 17:50:56.284250 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:56.291775 kubelet[2599]: W0317 17:50:56.284263 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:56.291775 kubelet[2599]: E0317 17:50:56.284277 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:50:56.291775 kubelet[2599]: E0317 17:50:56.280446 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:50:56.291775 kubelet[2599]: E0317 17:50:56.289180 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:56.291775 kubelet[2599]: W0317 17:50:56.289229 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:56.291775 kubelet[2599]: E0317 17:50:56.289251 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:50:56.292119 kubelet[2599]: E0317 17:50:56.290213 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:56.292119 kubelet[2599]: W0317 17:50:56.290226 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:56.292119 kubelet[2599]: E0317 17:50:56.290241 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:50:56.849309 kubelet[2599]: E0317 17:50:56.849001 2599 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9zh68" podUID="8eeb7871-e618-4798-a87d-f7b3c9c67c97" Mar 17 17:50:57.107242 kubelet[2599]: I0317 17:50:57.105176 2599 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 17 17:50:57.107242 kubelet[2599]: E0317 17:50:57.105599 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:50:57.121665 kubelet[2599]: E0317 17:50:57.119741 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:57.121665 kubelet[2599]: W0317 17:50:57.119774 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:57.121665 kubelet[2599]: E0317 17:50:57.119804 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:50:57.122346 kubelet[2599]: E0317 17:50:57.122164 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:57.122346 kubelet[2599]: W0317 17:50:57.122185 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:57.122346 kubelet[2599]: E0317 17:50:57.122205 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:50:57.133723 kubelet[2599]: E0317 17:50:57.133334 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:57.133723 kubelet[2599]: W0317 17:50:57.133377 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:57.133723 kubelet[2599]: E0317 17:50:57.133413 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:50:57.136030 kubelet[2599]: E0317 17:50:57.135861 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:57.136030 kubelet[2599]: W0317 17:50:57.135897 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:57.136030 kubelet[2599]: E0317 17:50:57.135963 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:50:57.138368 kubelet[2599]: E0317 17:50:57.138130 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:57.138368 kubelet[2599]: W0317 17:50:57.138151 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:57.138368 kubelet[2599]: E0317 17:50:57.138173 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:50:57.142278 kubelet[2599]: E0317 17:50:57.141922 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:57.142278 kubelet[2599]: W0317 17:50:57.141953 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:57.142278 kubelet[2599]: E0317 17:50:57.141990 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:50:57.147085 kubelet[2599]: E0317 17:50:57.146671 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:57.147085 kubelet[2599]: W0317 17:50:57.146708 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:57.147085 kubelet[2599]: E0317 17:50:57.146740 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:50:57.148842 kubelet[2599]: E0317 17:50:57.147228 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:57.148842 kubelet[2599]: W0317 17:50:57.147240 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:57.148842 kubelet[2599]: E0317 17:50:57.147253 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:50:57.148842 kubelet[2599]: E0317 17:50:57.147598 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:57.148842 kubelet[2599]: W0317 17:50:57.147609 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:57.148842 kubelet[2599]: E0317 17:50:57.147633 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:50:57.148842 kubelet[2599]: E0317 17:50:57.147894 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:57.148842 kubelet[2599]: W0317 17:50:57.147905 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:57.148842 kubelet[2599]: E0317 17:50:57.147917 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:50:57.148842 kubelet[2599]: E0317 17:50:57.148205 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:57.149778 kubelet[2599]: W0317 17:50:57.148216 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:57.149778 kubelet[2599]: E0317 17:50:57.148227 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:50:57.149778 kubelet[2599]: E0317 17:50:57.148486 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:57.149778 kubelet[2599]: W0317 17:50:57.148499 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:57.149778 kubelet[2599]: E0317 17:50:57.148510 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:50:57.149778 kubelet[2599]: E0317 17:50:57.149107 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:57.149778 kubelet[2599]: W0317 17:50:57.149119 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:57.149778 kubelet[2599]: E0317 17:50:57.149130 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:50:57.149778 kubelet[2599]: E0317 17:50:57.149383 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:57.149778 kubelet[2599]: W0317 17:50:57.149411 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:57.150159 kubelet[2599]: E0317 17:50:57.149426 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:50:57.150685 kubelet[2599]: E0317 17:50:57.150655 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:57.150685 kubelet[2599]: W0317 17:50:57.150673 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:57.150778 kubelet[2599]: E0317 17:50:57.150689 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:50:57.189459 kubelet[2599]: E0317 17:50:57.189164 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:57.189459 kubelet[2599]: W0317 17:50:57.189196 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:57.189459 kubelet[2599]: E0317 17:50:57.189227 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:50:57.190788 kubelet[2599]: E0317 17:50:57.190769 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:57.190875 kubelet[2599]: W0317 17:50:57.190859 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:57.190995 kubelet[2599]: E0317 17:50:57.190979 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:50:57.200337 kubelet[2599]: E0317 17:50:57.200304 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:57.200455 kubelet[2599]: W0317 17:50:57.200435 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:57.200797 kubelet[2599]: E0317 17:50:57.200578 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:50:57.201847 kubelet[2599]: E0317 17:50:57.201829 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:57.201952 kubelet[2599]: W0317 17:50:57.201936 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:57.202189 kubelet[2599]: E0317 17:50:57.202089 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:50:57.203966 kubelet[2599]: E0317 17:50:57.203812 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:57.203966 kubelet[2599]: W0317 17:50:57.203831 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:57.204146 kubelet[2599]: E0317 17:50:57.204125 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:50:57.205999 kubelet[2599]: E0317 17:50:57.205978 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:57.206861 kubelet[2599]: W0317 17:50:57.206476 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:57.206861 kubelet[2599]: E0317 17:50:57.206694 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:50:57.217227 kubelet[2599]: E0317 17:50:57.217140 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:57.217227 kubelet[2599]: W0317 17:50:57.217179 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:57.217582 kubelet[2599]: E0317 17:50:57.217349 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:50:57.220565 kubelet[2599]: E0317 17:50:57.220508 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:57.233892 kubelet[2599]: W0317 17:50:57.233120 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:57.233892 kubelet[2599]: E0317 17:50:57.233398 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:50:57.243100 kubelet[2599]: E0317 17:50:57.237360 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:57.243100 kubelet[2599]: W0317 17:50:57.237396 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:57.243100 kubelet[2599]: E0317 17:50:57.237615 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:50:57.243100 kubelet[2599]: E0317 17:50:57.237820 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:57.243100 kubelet[2599]: W0317 17:50:57.237831 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:57.243100 kubelet[2599]: E0317 17:50:57.237924 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:50:57.243100 kubelet[2599]: E0317 17:50:57.238485 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:57.243100 kubelet[2599]: W0317 17:50:57.238496 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:57.243100 kubelet[2599]: E0317 17:50:57.238604 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:50:57.243100 kubelet[2599]: E0317 17:50:57.238762 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:57.243683 kubelet[2599]: W0317 17:50:57.238772 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:57.243683 kubelet[2599]: E0317 17:50:57.238784 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:50:57.243683 kubelet[2599]: E0317 17:50:57.240329 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:57.243683 kubelet[2599]: W0317 17:50:57.240342 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:57.243683 kubelet[2599]: E0317 17:50:57.240474 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:50:57.243683 kubelet[2599]: E0317 17:50:57.241366 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:57.243683 kubelet[2599]: W0317 17:50:57.241377 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:57.243683 kubelet[2599]: E0317 17:50:57.241474 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:50:57.249414 kubelet[2599]: E0317 17:50:57.247152 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:57.249414 kubelet[2599]: W0317 17:50:57.247199 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:57.249414 kubelet[2599]: E0317 17:50:57.247328 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:50:57.255520 kubelet[2599]: E0317 17:50:57.250005 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:57.259803 kubelet[2599]: W0317 17:50:57.256728 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:57.259803 kubelet[2599]: E0317 17:50:57.256798 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:50:57.262598 kubelet[2599]: E0317 17:50:57.262545 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:57.262598 kubelet[2599]: W0317 17:50:57.262587 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:57.262784 kubelet[2599]: E0317 17:50:57.262623 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:50:57.263398 kubelet[2599]: E0317 17:50:57.263358 2599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:50:57.263398 kubelet[2599]: W0317 17:50:57.263387 2599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:50:57.263490 kubelet[2599]: E0317 17:50:57.263406 2599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:50:57.519452 containerd[1495]: time="2025-03-17T17:50:57.516650942Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:50:57.522808 containerd[1495]: time="2025-03-17T17:50:57.522731820Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2: active requests=0, bytes read=5364011" Mar 17 17:50:57.524532 containerd[1495]: time="2025-03-17T17:50:57.524404172Z" level=info msg="ImageCreate event name:\"sha256:441bf8ace5b7fa3742b7fafaf6cd60fea340dd307169a18c75a1d78cba3a8365\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:50:57.529629 containerd[1495]: time="2025-03-17T17:50:57.529529644Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:50:57.531389 containerd[1495]: time="2025-03-17T17:50:57.531079033Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" with image id \"sha256:441bf8ace5b7fa3742b7fafaf6cd60fea340dd307169a18c75a1d78cba3a8365\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\", size \"6857075\" in 2.371412062s" Mar 17 17:50:57.531389 containerd[1495]: time="2025-03-17T17:50:57.531142268Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" returns image reference \"sha256:441bf8ace5b7fa3742b7fafaf6cd60fea340dd307169a18c75a1d78cba3a8365\"" Mar 17 17:50:57.542718 containerd[1495]: time="2025-03-17T17:50:57.542127972Z" level=info msg="CreateContainer within sandbox \"3283539b9c5fd33722a92838485c618f850526ff6b36f4ba80640e273e47bc0f\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 17 17:50:57.578112 containerd[1495]: time="2025-03-17T17:50:57.578001272Z" level=info msg="CreateContainer within sandbox \"3283539b9c5fd33722a92838485c618f850526ff6b36f4ba80640e273e47bc0f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b263f67ee78c54e1aba11a084faf77fb6b533de8fac4ea7fb31490b5f625f2bd\"" Mar 17 17:50:57.580694 containerd[1495]: time="2025-03-17T17:50:57.578854867Z" level=info msg="StartContainer for \"b263f67ee78c54e1aba11a084faf77fb6b533de8fac4ea7fb31490b5f625f2bd\"" Mar 17 17:50:57.664535 systemd[1]: run-containerd-runc-k8s.io-b263f67ee78c54e1aba11a084faf77fb6b533de8fac4ea7fb31490b5f625f2bd-runc.IsfLtV.mount: Deactivated successfully. Mar 17 17:50:57.679775 systemd[1]: Started cri-containerd-b263f67ee78c54e1aba11a084faf77fb6b533de8fac4ea7fb31490b5f625f2bd.scope - libcontainer container b263f67ee78c54e1aba11a084faf77fb6b533de8fac4ea7fb31490b5f625f2bd. Mar 17 17:50:57.766527 containerd[1495]: time="2025-03-17T17:50:57.766444804Z" level=info msg="StartContainer for \"b263f67ee78c54e1aba11a084faf77fb6b533de8fac4ea7fb31490b5f625f2bd\" returns successfully" Mar 17 17:50:57.782935 systemd[1]: cri-containerd-b263f67ee78c54e1aba11a084faf77fb6b533de8fac4ea7fb31490b5f625f2bd.scope: Deactivated successfully. 
Mar 17 17:50:58.128351 kubelet[2599]: E0317 17:50:58.122745 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:50:58.284199 containerd[1495]: time="2025-03-17T17:50:58.283154445Z" level=info msg="shim disconnected" id=b263f67ee78c54e1aba11a084faf77fb6b533de8fac4ea7fb31490b5f625f2bd namespace=k8s.io Mar 17 17:50:58.284199 containerd[1495]: time="2025-03-17T17:50:58.283243570Z" level=warning msg="cleaning up after shim disconnected" id=b263f67ee78c54e1aba11a084faf77fb6b533de8fac4ea7fb31490b5f625f2bd namespace=k8s.io Mar 17 17:50:58.284199 containerd[1495]: time="2025-03-17T17:50:58.283257328Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:50:58.567994 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b263f67ee78c54e1aba11a084faf77fb6b533de8fac4ea7fb31490b5f625f2bd-rootfs.mount: Deactivated successfully. Mar 17 17:50:58.860749 kubelet[2599]: E0317 17:50:58.854758 2599 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9zh68" podUID="8eeb7871-e618-4798-a87d-f7b3c9c67c97" Mar 17 17:50:59.131919 kubelet[2599]: E0317 17:50:59.128478 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:50:59.141092 containerd[1495]: time="2025-03-17T17:50:59.141028506Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\"" Mar 17 17:51:00.148906 kernel: hrtimer: interrupt took 14421062 ns Mar 17 17:51:00.851050 kubelet[2599]: E0317 17:51:00.849199 2599 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9zh68" podUID="8eeb7871-e618-4798-a87d-f7b3c9c67c97" Mar 17 17:51:02.849244 kubelet[2599]: E0317 17:51:02.849181 2599 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9zh68" podUID="8eeb7871-e618-4798-a87d-f7b3c9c67c97" Mar 17 17:51:04.848885 kubelet[2599]: E0317 17:51:04.848802 2599 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9zh68" podUID="8eeb7871-e618-4798-a87d-f7b3c9c67c97" Mar 17 17:51:06.850616 kubelet[2599]: E0317 17:51:06.849907 2599 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9zh68" podUID="8eeb7871-e618-4798-a87d-f7b3c9c67c97" Mar 17 17:51:08.849574 kubelet[2599]: E0317 17:51:08.849481 2599 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9zh68" podUID="8eeb7871-e618-4798-a87d-f7b3c9c67c97" Mar 17 17:51:10.857236 kubelet[2599]: E0317 17:51:10.855999 2599 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni 
plugin not initialized" pod="calico-system/csi-node-driver-9zh68" podUID="8eeb7871-e618-4798-a87d-f7b3c9c67c97" Mar 17 17:51:12.738360 containerd[1495]: time="2025-03-17T17:51:12.738224749Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:51:12.748438 containerd[1495]: time="2025-03-17T17:51:12.748332223Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.2: active requests=0, bytes read=97781477" Mar 17 17:51:12.757341 containerd[1495]: time="2025-03-17T17:51:12.757238513Z" level=info msg="ImageCreate event name:\"sha256:cda13293c895a8a3b06c1e190b70fb6fe61036db2e59764036fc6e65ec374693\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:51:12.766076 containerd[1495]: time="2025-03-17T17:51:12.765165962Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:51:12.767452 containerd[1495]: time="2025-03-17T17:51:12.766854346Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.2\" with image id \"sha256:cda13293c895a8a3b06c1e190b70fb6fe61036db2e59764036fc6e65ec374693\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\", size \"99274581\" in 13.62577498s" Mar 17 17:51:12.767452 containerd[1495]: time="2025-03-17T17:51:12.766895457Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\" returns image reference \"sha256:cda13293c895a8a3b06c1e190b70fb6fe61036db2e59764036fc6e65ec374693\"" Mar 17 17:51:12.775991 containerd[1495]: time="2025-03-17T17:51:12.775928544Z" level=info msg="CreateContainer within sandbox \"3283539b9c5fd33722a92838485c618f850526ff6b36f4ba80640e273e47bc0f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 
17 17:51:12.820218 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1244696181.mount: Deactivated successfully. Mar 17 17:51:12.856540 containerd[1495]: time="2025-03-17T17:51:12.845844128Z" level=info msg="CreateContainer within sandbox \"3283539b9c5fd33722a92838485c618f850526ff6b36f4ba80640e273e47bc0f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e30190f3c3fae4d8c995a63012ab58db4f52608d4e90ed90bba5405e86efa7c7\"" Mar 17 17:51:12.856540 containerd[1495]: time="2025-03-17T17:51:12.850439354Z" level=info msg="StartContainer for \"e30190f3c3fae4d8c995a63012ab58db4f52608d4e90ed90bba5405e86efa7c7\"" Mar 17 17:51:12.856748 kubelet[2599]: E0317 17:51:12.850092 2599 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9zh68" podUID="8eeb7871-e618-4798-a87d-f7b3c9c67c97" Mar 17 17:51:12.915639 systemd[1]: Started cri-containerd-e30190f3c3fae4d8c995a63012ab58db4f52608d4e90ed90bba5405e86efa7c7.scope - libcontainer container e30190f3c3fae4d8c995a63012ab58db4f52608d4e90ed90bba5405e86efa7c7. 
Mar 17 17:51:13.243624 containerd[1495]: time="2025-03-17T17:51:13.239767836Z" level=info msg="StartContainer for \"e30190f3c3fae4d8c995a63012ab58db4f52608d4e90ed90bba5405e86efa7c7\" returns successfully" Mar 17 17:51:13.248188 kubelet[2599]: E0317 17:51:13.246887 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:51:14.250591 kubelet[2599]: E0317 17:51:14.250459 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:51:14.849979 kubelet[2599]: E0317 17:51:14.849365 2599 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9zh68" podUID="8eeb7871-e618-4798-a87d-f7b3c9c67c97" Mar 17 17:51:16.018438 systemd[1]: cri-containerd-e30190f3c3fae4d8c995a63012ab58db4f52608d4e90ed90bba5405e86efa7c7.scope: Deactivated successfully. Mar 17 17:51:16.018749 systemd[1]: cri-containerd-e30190f3c3fae4d8c995a63012ab58db4f52608d4e90ed90bba5405e86efa7c7.scope: Consumed 1.080s CPU time. Mar 17 17:51:16.024495 kubelet[2599]: I0317 17:51:16.023377 2599 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Mar 17 17:51:16.045612 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e30190f3c3fae4d8c995a63012ab58db4f52608d4e90ed90bba5405e86efa7c7-rootfs.mount: Deactivated successfully. 
Mar 17 17:51:16.362777 containerd[1495]: time="2025-03-17T17:51:16.361763109Z" level=info msg="shim disconnected" id=e30190f3c3fae4d8c995a63012ab58db4f52608d4e90ed90bba5405e86efa7c7 namespace=k8s.io Mar 17 17:51:16.362777 containerd[1495]: time="2025-03-17T17:51:16.361952378Z" level=warning msg="cleaning up after shim disconnected" id=e30190f3c3fae4d8c995a63012ab58db4f52608d4e90ed90bba5405e86efa7c7 namespace=k8s.io Mar 17 17:51:16.362777 containerd[1495]: time="2025-03-17T17:51:16.361988237Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:51:16.663627 systemd[1]: Created slice kubepods-burstable-podb10ce8b2_d481_4335_85f1_af093a79a238.slice - libcontainer container kubepods-burstable-podb10ce8b2_d481_4335_85f1_af093a79a238.slice. Mar 17 17:51:16.669347 systemd[1]: Created slice kubepods-besteffort-pode2616273_669f_41e6_aed5_5c36404c0a1a.slice - libcontainer container kubepods-besteffort-pode2616273_669f_41e6_aed5_5c36404c0a1a.slice. Mar 17 17:51:16.776677 kubelet[2599]: I0317 17:51:16.776573 2599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsh6r\" (UniqueName: \"kubernetes.io/projected/e2616273-669f-41e6-aed5-5c36404c0a1a-kube-api-access-qsh6r\") pod \"calico-apiserver-db9856-fshz9\" (UID: \"e2616273-669f-41e6-aed5-5c36404c0a1a\") " pod="calico-apiserver/calico-apiserver-db9856-fshz9" Mar 17 17:51:16.776677 kubelet[2599]: I0317 17:51:16.776657 2599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e2616273-669f-41e6-aed5-5c36404c0a1a-calico-apiserver-certs\") pod \"calico-apiserver-db9856-fshz9\" (UID: \"e2616273-669f-41e6-aed5-5c36404c0a1a\") " pod="calico-apiserver/calico-apiserver-db9856-fshz9" Mar 17 17:51:16.776999 kubelet[2599]: I0317 17:51:16.776733 2599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" 
(UniqueName: \"kubernetes.io/configmap/b10ce8b2-d481-4335-85f1-af093a79a238-config-volume\") pod \"coredns-668d6bf9bc-t9ppl\" (UID: \"b10ce8b2-d481-4335-85f1-af093a79a238\") " pod="kube-system/coredns-668d6bf9bc-t9ppl" Mar 17 17:51:16.776999 kubelet[2599]: I0317 17:51:16.776782 2599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5pwk\" (UniqueName: \"kubernetes.io/projected/b10ce8b2-d481-4335-85f1-af093a79a238-kube-api-access-n5pwk\") pod \"coredns-668d6bf9bc-t9ppl\" (UID: \"b10ce8b2-d481-4335-85f1-af093a79a238\") " pod="kube-system/coredns-668d6bf9bc-t9ppl" Mar 17 17:51:16.853167 systemd[1]: Created slice kubepods-besteffort-pod4ee3dfd7_d4c4_495e_b4fa_6712bcf8d78e.slice - libcontainer container kubepods-besteffort-pod4ee3dfd7_d4c4_495e_b4fa_6712bcf8d78e.slice. Mar 17 17:51:16.859330 systemd[1]: Created slice kubepods-burstable-pod8c4845cd_7043_485d_9bdd_731020b2270e.slice - libcontainer container kubepods-burstable-pod8c4845cd_7043_485d_9bdd_731020b2270e.slice. Mar 17 17:51:16.865291 systemd[1]: Created slice kubepods-besteffort-pod802f1eaf_7d52_4b00_9fa9_f37418e92a64.slice - libcontainer container kubepods-besteffort-pod802f1eaf_7d52_4b00_9fa9_f37418e92a64.slice. Mar 17 17:51:16.869413 systemd[1]: Created slice kubepods-besteffort-pod8eeb7871_e618_4798_a87d_f7b3c9c67c97.slice - libcontainer container kubepods-besteffort-pod8eeb7871_e618_4798_a87d_f7b3c9c67c97.slice. 
Mar 17 17:51:16.885966 containerd[1495]: time="2025-03-17T17:51:16.885906078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9zh68,Uid:8eeb7871-e618-4798-a87d-f7b3c9c67c97,Namespace:calico-system,Attempt:0,}" Mar 17 17:51:16.967358 kubelet[2599]: E0317 17:51:16.967196 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:51:16.968316 containerd[1495]: time="2025-03-17T17:51:16.968236619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t9ppl,Uid:b10ce8b2-d481-4335-85f1-af093a79a238,Namespace:kube-system,Attempt:0,}" Mar 17 17:51:16.972593 containerd[1495]: time="2025-03-17T17:51:16.972563018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-db9856-fshz9,Uid:e2616273-669f-41e6-aed5-5c36404c0a1a,Namespace:calico-apiserver,Attempt:0,}" Mar 17 17:51:16.978358 kubelet[2599]: I0317 17:51:16.978303 2599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpjgj\" (UniqueName: \"kubernetes.io/projected/8c4845cd-7043-485d-9bdd-731020b2270e-kube-api-access-tpjgj\") pod \"coredns-668d6bf9bc-nk8jr\" (UID: \"8c4845cd-7043-485d-9bdd-731020b2270e\") " pod="kube-system/coredns-668d6bf9bc-nk8jr" Mar 17 17:51:16.978358 kubelet[2599]: I0317 17:51:16.978349 2599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c4845cd-7043-485d-9bdd-731020b2270e-config-volume\") pod \"coredns-668d6bf9bc-nk8jr\" (UID: \"8c4845cd-7043-485d-9bdd-731020b2270e\") " pod="kube-system/coredns-668d6bf9bc-nk8jr" Mar 17 17:51:16.978358 kubelet[2599]: I0317 17:51:16.978366 2599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/802f1eaf-7d52-4b00-9fa9-f37418e92a64-calico-apiserver-certs\") pod \"calico-apiserver-db9856-swh96\" (UID: \"802f1eaf-7d52-4b00-9fa9-f37418e92a64\") " pod="calico-apiserver/calico-apiserver-db9856-swh96" Mar 17 17:51:16.978626 kubelet[2599]: I0317 17:51:16.978387 2599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4ee3dfd7-d4c4-495e-b4fa-6712bcf8d78e-tigera-ca-bundle\") pod \"calico-kube-controllers-7d6b67b85-j5xwp\" (UID: \"4ee3dfd7-d4c4-495e-b4fa-6712bcf8d78e\") " pod="calico-system/calico-kube-controllers-7d6b67b85-j5xwp" Mar 17 17:51:16.978626 kubelet[2599]: I0317 17:51:16.978406 2599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chvt8\" (UniqueName: \"kubernetes.io/projected/4ee3dfd7-d4c4-495e-b4fa-6712bcf8d78e-kube-api-access-chvt8\") pod \"calico-kube-controllers-7d6b67b85-j5xwp\" (UID: \"4ee3dfd7-d4c4-495e-b4fa-6712bcf8d78e\") " pod="calico-system/calico-kube-controllers-7d6b67b85-j5xwp" Mar 17 17:51:16.978626 kubelet[2599]: I0317 17:51:16.978424 2599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vftk\" (UniqueName: \"kubernetes.io/projected/802f1eaf-7d52-4b00-9fa9-f37418e92a64-kube-api-access-2vftk\") pod \"calico-apiserver-db9856-swh96\" (UID: \"802f1eaf-7d52-4b00-9fa9-f37418e92a64\") " pod="calico-apiserver/calico-apiserver-db9856-swh96" Mar 17 17:51:17.150522 kubelet[2599]: I0317 17:51:17.150473 2599 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 17 17:51:17.150999 kubelet[2599]: E0317 17:51:17.150905 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:51:17.158061 containerd[1495]: time="2025-03-17T17:51:17.156673928Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d6b67b85-j5xwp,Uid:4ee3dfd7-d4c4-495e-b4fa-6712bcf8d78e,Namespace:calico-system,Attempt:0,}" Mar 17 17:51:17.163240 kubelet[2599]: E0317 17:51:17.163090 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:51:17.163946 containerd[1495]: time="2025-03-17T17:51:17.163840895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nk8jr,Uid:8c4845cd-7043-485d-9bdd-731020b2270e,Namespace:kube-system,Attempt:0,}" Mar 17 17:51:17.169147 containerd[1495]: time="2025-03-17T17:51:17.169088304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-db9856-swh96,Uid:802f1eaf-7d52-4b00-9fa9-f37418e92a64,Namespace:calico-apiserver,Attempt:0,}" Mar 17 17:51:17.194782 containerd[1495]: time="2025-03-17T17:51:17.193840017Z" level=error msg="Failed to destroy network for sandbox \"ec8dc90f4256e5e13ea71e3c83af7eded25b71eda92de161921988d536de088c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:17.194782 containerd[1495]: time="2025-03-17T17:51:17.194800667Z" level=error msg="encountered an error cleaning up failed sandbox \"ec8dc90f4256e5e13ea71e3c83af7eded25b71eda92de161921988d536de088c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:17.195050 containerd[1495]: time="2025-03-17T17:51:17.194899400Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t9ppl,Uid:b10ce8b2-d481-4335-85f1-af093a79a238,Namespace:kube-system,Attempt:0,} failed, error" 
error="failed to setup network for sandbox \"ec8dc90f4256e5e13ea71e3c83af7eded25b71eda92de161921988d536de088c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:17.195298 kubelet[2599]: E0317 17:51:17.195249 2599 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec8dc90f4256e5e13ea71e3c83af7eded25b71eda92de161921988d536de088c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:17.195365 kubelet[2599]: E0317 17:51:17.195341 2599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec8dc90f4256e5e13ea71e3c83af7eded25b71eda92de161921988d536de088c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-t9ppl" Mar 17 17:51:17.195404 kubelet[2599]: E0317 17:51:17.195375 2599 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec8dc90f4256e5e13ea71e3c83af7eded25b71eda92de161921988d536de088c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-t9ppl" Mar 17 17:51:17.195472 kubelet[2599]: E0317 17:51:17.195433 2599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-t9ppl_kube-system(b10ce8b2-d481-4335-85f1-af093a79a238)\" with CreatePodSandboxError: \"Failed to create 
sandbox for pod \\\"coredns-668d6bf9bc-t9ppl_kube-system(b10ce8b2-d481-4335-85f1-af093a79a238)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ec8dc90f4256e5e13ea71e3c83af7eded25b71eda92de161921988d536de088c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-t9ppl" podUID="b10ce8b2-d481-4335-85f1-af093a79a238" Mar 17 17:51:17.209739 containerd[1495]: time="2025-03-17T17:51:17.209652782Z" level=error msg="Failed to destroy network for sandbox \"78661d8268b9ec071762ea23e31e21309122e61c5cbce70ba5e3a5ccaf5e1c2a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:17.210217 containerd[1495]: time="2025-03-17T17:51:17.210179107Z" level=error msg="encountered an error cleaning up failed sandbox \"78661d8268b9ec071762ea23e31e21309122e61c5cbce70ba5e3a5ccaf5e1c2a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:17.210292 containerd[1495]: time="2025-03-17T17:51:17.210260285Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-db9856-fshz9,Uid:e2616273-669f-41e6-aed5-5c36404c0a1a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"78661d8268b9ec071762ea23e31e21309122e61c5cbce70ba5e3a5ccaf5e1c2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:17.210556 kubelet[2599]: E0317 17:51:17.210503 2599 log.go:32] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78661d8268b9ec071762ea23e31e21309122e61c5cbce70ba5e3a5ccaf5e1c2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:17.210618 kubelet[2599]: E0317 17:51:17.210586 2599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78661d8268b9ec071762ea23e31e21309122e61c5cbce70ba5e3a5ccaf5e1c2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-db9856-fshz9" Mar 17 17:51:17.210660 kubelet[2599]: E0317 17:51:17.210617 2599 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78661d8268b9ec071762ea23e31e21309122e61c5cbce70ba5e3a5ccaf5e1c2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-db9856-fshz9" Mar 17 17:51:17.210718 kubelet[2599]: E0317 17:51:17.210682 2599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-db9856-fshz9_calico-apiserver(e2616273-669f-41e6-aed5-5c36404c0a1a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-db9856-fshz9_calico-apiserver(e2616273-669f-41e6-aed5-5c36404c0a1a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"78661d8268b9ec071762ea23e31e21309122e61c5cbce70ba5e3a5ccaf5e1c2a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-db9856-fshz9" podUID="e2616273-669f-41e6-aed5-5c36404c0a1a" Mar 17 17:51:17.217068 containerd[1495]: time="2025-03-17T17:51:17.216345355Z" level=error msg="Failed to destroy network for sandbox \"3e3ed683c84cdf993bdf7652a727c2e03891d4b841b79faae2a62a59c10972b8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:17.217068 containerd[1495]: time="2025-03-17T17:51:17.216787286Z" level=error msg="encountered an error cleaning up failed sandbox \"3e3ed683c84cdf993bdf7652a727c2e03891d4b841b79faae2a62a59c10972b8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:17.217068 containerd[1495]: time="2025-03-17T17:51:17.216842624Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9zh68,Uid:8eeb7871-e618-4798-a87d-f7b3c9c67c97,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3e3ed683c84cdf993bdf7652a727c2e03891d4b841b79faae2a62a59c10972b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:17.217263 kubelet[2599]: E0317 17:51:17.217140 2599 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e3ed683c84cdf993bdf7652a727c2e03891d4b841b79faae2a62a59c10972b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 
17 17:51:17.217263 kubelet[2599]: E0317 17:51:17.217203 2599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e3ed683c84cdf993bdf7652a727c2e03891d4b841b79faae2a62a59c10972b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9zh68" Mar 17 17:51:17.217263 kubelet[2599]: E0317 17:51:17.217225 2599 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e3ed683c84cdf993bdf7652a727c2e03891d4b841b79faae2a62a59c10972b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9zh68" Mar 17 17:51:17.217456 kubelet[2599]: E0317 17:51:17.217306 2599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9zh68_calico-system(8eeb7871-e618-4798-a87d-f7b3c9c67c97)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9zh68_calico-system(8eeb7871-e618-4798-a87d-f7b3c9c67c97)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3e3ed683c84cdf993bdf7652a727c2e03891d4b841b79faae2a62a59c10972b8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9zh68" podUID="8eeb7871-e618-4798-a87d-f7b3c9c67c97" Mar 17 17:51:17.259094 kubelet[2599]: I0317 17:51:17.259047 2599 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec8dc90f4256e5e13ea71e3c83af7eded25b71eda92de161921988d536de088c" Mar 17 
17:51:17.283414 kubelet[2599]: E0317 17:51:17.262562 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:51:17.283414 kubelet[2599]: I0317 17:51:17.263874 2599 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e3ed683c84cdf993bdf7652a727c2e03891d4b841b79faae2a62a59c10972b8" Mar 17 17:51:17.283414 kubelet[2599]: E0317 17:51:17.266522 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:51:17.283414 kubelet[2599]: I0317 17:51:17.267681 2599 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78661d8268b9ec071762ea23e31e21309122e61c5cbce70ba5e3a5ccaf5e1c2a" Mar 17 17:51:17.283414 kubelet[2599]: E0317 17:51:17.267913 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:51:17.283680 containerd[1495]: time="2025-03-17T17:51:17.259603769Z" level=info msg="StopPodSandbox for \"ec8dc90f4256e5e13ea71e3c83af7eded25b71eda92de161921988d536de088c\"" Mar 17 17:51:17.283680 containerd[1495]: time="2025-03-17T17:51:17.259815160Z" level=info msg="Ensure that sandbox ec8dc90f4256e5e13ea71e3c83af7eded25b71eda92de161921988d536de088c in task-service has been cleanup successfully" Mar 17 17:51:17.283680 containerd[1495]: time="2025-03-17T17:51:17.262179245Z" level=info msg="TearDown network for sandbox \"ec8dc90f4256e5e13ea71e3c83af7eded25b71eda92de161921988d536de088c\" successfully" Mar 17 17:51:17.283680 containerd[1495]: time="2025-03-17T17:51:17.262219794Z" level=info msg="StopPodSandbox for \"ec8dc90f4256e5e13ea71e3c83af7eded25b71eda92de161921988d536de088c\" returns successfully" Mar 17 17:51:17.283680 containerd[1495]: 
time="2025-03-17T17:51:17.263198480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t9ppl,Uid:b10ce8b2-d481-4335-85f1-af093a79a238,Namespace:kube-system,Attempt:1,}" Mar 17 17:51:17.283680 containerd[1495]: time="2025-03-17T17:51:17.264960782Z" level=info msg="StopPodSandbox for \"3e3ed683c84cdf993bdf7652a727c2e03891d4b841b79faae2a62a59c10972b8\"" Mar 17 17:51:17.283680 containerd[1495]: time="2025-03-17T17:51:17.265263271Z" level=info msg="Ensure that sandbox 3e3ed683c84cdf993bdf7652a727c2e03891d4b841b79faae2a62a59c10972b8 in task-service has been cleanup successfully" Mar 17 17:51:17.283680 containerd[1495]: time="2025-03-17T17:51:17.265464121Z" level=info msg="TearDown network for sandbox \"3e3ed683c84cdf993bdf7652a727c2e03891d4b841b79faae2a62a59c10972b8\" successfully" Mar 17 17:51:17.283680 containerd[1495]: time="2025-03-17T17:51:17.265482758Z" level=info msg="StopPodSandbox for \"3e3ed683c84cdf993bdf7652a727c2e03891d4b841b79faae2a62a59c10972b8\" returns successfully" Mar 17 17:51:17.283680 containerd[1495]: time="2025-03-17T17:51:17.265980598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9zh68,Uid:8eeb7871-e618-4798-a87d-f7b3c9c67c97,Namespace:calico-system,Attempt:1,}" Mar 17 17:51:17.283680 containerd[1495]: time="2025-03-17T17:51:17.267433256Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\"" Mar 17 17:51:17.283680 containerd[1495]: time="2025-03-17T17:51:17.268198567Z" level=info msg="StopPodSandbox for \"78661d8268b9ec071762ea23e31e21309122e61c5cbce70ba5e3a5ccaf5e1c2a\"" Mar 17 17:51:17.283680 containerd[1495]: time="2025-03-17T17:51:17.268377976Z" level=info msg="Ensure that sandbox 78661d8268b9ec071762ea23e31e21309122e61c5cbce70ba5e3a5ccaf5e1c2a in task-service has been cleanup successfully" Mar 17 17:51:17.283680 containerd[1495]: time="2025-03-17T17:51:17.268523870Z" level=info msg="TearDown network for sandbox 
\"78661d8268b9ec071762ea23e31e21309122e61c5cbce70ba5e3a5ccaf5e1c2a\" successfully" Mar 17 17:51:17.283680 containerd[1495]: time="2025-03-17T17:51:17.268533479Z" level=info msg="StopPodSandbox for \"78661d8268b9ec071762ea23e31e21309122e61c5cbce70ba5e3a5ccaf5e1c2a\" returns successfully" Mar 17 17:51:17.283680 containerd[1495]: time="2025-03-17T17:51:17.268931264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-db9856-fshz9,Uid:e2616273-669f-41e6-aed5-5c36404c0a1a,Namespace:calico-apiserver,Attempt:1,}" Mar 17 17:51:18.061912 systemd[1]: run-netns-cni\x2d45b70fc1\x2ddefe\x2d28a0\x2de3f4\x2d3d65d82550ed.mount: Deactivated successfully. Mar 17 17:51:18.062077 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-78661d8268b9ec071762ea23e31e21309122e61c5cbce70ba5e3a5ccaf5e1c2a-shm.mount: Deactivated successfully. Mar 17 17:51:18.062157 systemd[1]: run-netns-cni\x2dc8cfa21d\x2d3245\x2d600e\x2d707e\x2df7f00f189c4d.mount: Deactivated successfully. Mar 17 17:51:18.062230 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3e3ed683c84cdf993bdf7652a727c2e03891d4b841b79faae2a62a59c10972b8-shm.mount: Deactivated successfully. Mar 17 17:51:18.062307 systemd[1]: run-netns-cni\x2d1bc85537\x2d521c\x2db485\x2d2835\x2d97e73b7fd442.mount: Deactivated successfully. Mar 17 17:51:18.062402 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ec8dc90f4256e5e13ea71e3c83af7eded25b71eda92de161921988d536de088c-shm.mount: Deactivated successfully. 
Mar 17 17:51:18.138527 containerd[1495]: time="2025-03-17T17:51:18.138468334Z" level=error msg="Failed to destroy network for sandbox \"ab5ca27476b38916f05e02fb891a602c7f2ed5ca72ab45f4abc3ebea237758bf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:18.139690 containerd[1495]: time="2025-03-17T17:51:18.139610498Z" level=error msg="encountered an error cleaning up failed sandbox \"ab5ca27476b38916f05e02fb891a602c7f2ed5ca72ab45f4abc3ebea237758bf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:18.139871 containerd[1495]: time="2025-03-17T17:51:18.139717576Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d6b67b85-j5xwp,Uid:4ee3dfd7-d4c4-495e-b4fa-6712bcf8d78e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ab5ca27476b38916f05e02fb891a602c7f2ed5ca72ab45f4abc3ebea237758bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:18.140080 kubelet[2599]: E0317 17:51:18.140035 2599 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab5ca27476b38916f05e02fb891a602c7f2ed5ca72ab45f4abc3ebea237758bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:18.140163 kubelet[2599]: E0317 17:51:18.140134 2599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc 
= failed to setup network for sandbox \"ab5ca27476b38916f05e02fb891a602c7f2ed5ca72ab45f4abc3ebea237758bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d6b67b85-j5xwp" Mar 17 17:51:18.140202 kubelet[2599]: E0317 17:51:18.140168 2599 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab5ca27476b38916f05e02fb891a602c7f2ed5ca72ab45f4abc3ebea237758bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d6b67b85-j5xwp" Mar 17 17:51:18.140278 kubelet[2599]: E0317 17:51:18.140239 2599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7d6b67b85-j5xwp_calico-system(4ee3dfd7-d4c4-495e-b4fa-6712bcf8d78e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7d6b67b85-j5xwp_calico-system(4ee3dfd7-d4c4-495e-b4fa-6712bcf8d78e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ab5ca27476b38916f05e02fb891a602c7f2ed5ca72ab45f4abc3ebea237758bf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7d6b67b85-j5xwp" podUID="4ee3dfd7-d4c4-495e-b4fa-6712bcf8d78e" Mar 17 17:51:18.145167 containerd[1495]: time="2025-03-17T17:51:18.145114904Z" level=error msg="Failed to destroy network for sandbox \"c3bc28f45d2ec64413f682e0fa3ae1f2815a1ace0cf71f660a835588639d5f9d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:18.145578 containerd[1495]: time="2025-03-17T17:51:18.145547696Z" level=error msg="encountered an error cleaning up failed sandbox \"c3bc28f45d2ec64413f682e0fa3ae1f2815a1ace0cf71f660a835588639d5f9d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:18.145635 containerd[1495]: time="2025-03-17T17:51:18.145616310Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nk8jr,Uid:8c4845cd-7043-485d-9bdd-731020b2270e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c3bc28f45d2ec64413f682e0fa3ae1f2815a1ace0cf71f660a835588639d5f9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:18.145890 kubelet[2599]: E0317 17:51:18.145846 2599 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3bc28f45d2ec64413f682e0fa3ae1f2815a1ace0cf71f660a835588639d5f9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:18.145965 kubelet[2599]: E0317 17:51:18.145916 2599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3bc28f45d2ec64413f682e0fa3ae1f2815a1ace0cf71f660a835588639d5f9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-668d6bf9bc-nk8jr" Mar 17 17:51:18.145965 kubelet[2599]: E0317 17:51:18.145940 2599 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3bc28f45d2ec64413f682e0fa3ae1f2815a1ace0cf71f660a835588639d5f9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-nk8jr" Mar 17 17:51:18.146041 kubelet[2599]: E0317 17:51:18.145997 2599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-nk8jr_kube-system(8c4845cd-7043-485d-9bdd-731020b2270e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-nk8jr_kube-system(8c4845cd-7043-485d-9bdd-731020b2270e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c3bc28f45d2ec64413f682e0fa3ae1f2815a1ace0cf71f660a835588639d5f9d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-nk8jr" podUID="8c4845cd-7043-485d-9bdd-731020b2270e" Mar 17 17:51:18.271383 kubelet[2599]: I0317 17:51:18.271335 2599 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3bc28f45d2ec64413f682e0fa3ae1f2815a1ace0cf71f660a835588639d5f9d" Mar 17 17:51:18.272039 containerd[1495]: time="2025-03-17T17:51:18.271978794Z" level=info msg="StopPodSandbox for \"c3bc28f45d2ec64413f682e0fa3ae1f2815a1ace0cf71f660a835588639d5f9d\"" Mar 17 17:51:18.272276 containerd[1495]: time="2025-03-17T17:51:18.272244231Z" level=info msg="Ensure that sandbox c3bc28f45d2ec64413f682e0fa3ae1f2815a1ace0cf71f660a835588639d5f9d in task-service has been cleanup successfully" Mar 17 17:51:18.272619 containerd[1495]: 
time="2025-03-17T17:51:18.272594533Z" level=info msg="TearDown network for sandbox \"c3bc28f45d2ec64413f682e0fa3ae1f2815a1ace0cf71f660a835588639d5f9d\" successfully" Mar 17 17:51:18.272619 containerd[1495]: time="2025-03-17T17:51:18.272614762Z" level=info msg="StopPodSandbox for \"c3bc28f45d2ec64413f682e0fa3ae1f2815a1ace0cf71f660a835588639d5f9d\" returns successfully" Mar 17 17:51:18.272888 kubelet[2599]: E0317 17:51:18.272860 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:51:18.273316 kubelet[2599]: I0317 17:51:18.273291 2599 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab5ca27476b38916f05e02fb891a602c7f2ed5ca72ab45f4abc3ebea237758bf" Mar 17 17:51:18.273412 containerd[1495]: time="2025-03-17T17:51:18.273375062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nk8jr,Uid:8c4845cd-7043-485d-9bdd-731020b2270e,Namespace:kube-system,Attempt:1,}" Mar 17 17:51:18.273821 containerd[1495]: time="2025-03-17T17:51:18.273726065Z" level=info msg="StopPodSandbox for \"ab5ca27476b38916f05e02fb891a602c7f2ed5ca72ab45f4abc3ebea237758bf\"" Mar 17 17:51:18.273999 containerd[1495]: time="2025-03-17T17:51:18.273977164Z" level=info msg="Ensure that sandbox ab5ca27476b38916f05e02fb891a602c7f2ed5ca72ab45f4abc3ebea237758bf in task-service has been cleanup successfully" Mar 17 17:51:18.274174 containerd[1495]: time="2025-03-17T17:51:18.274153868Z" level=info msg="TearDown network for sandbox \"ab5ca27476b38916f05e02fb891a602c7f2ed5ca72ab45f4abc3ebea237758bf\" successfully" Mar 17 17:51:18.274174 containerd[1495]: time="2025-03-17T17:51:18.274169128Z" level=info msg="StopPodSandbox for \"ab5ca27476b38916f05e02fb891a602c7f2ed5ca72ab45f4abc3ebea237758bf\" returns successfully" Mar 17 17:51:18.274628 containerd[1495]: time="2025-03-17T17:51:18.274589687Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-7d6b67b85-j5xwp,Uid:4ee3dfd7-d4c4-495e-b4fa-6712bcf8d78e,Namespace:calico-system,Attempt:1,}" Mar 17 17:51:18.507552 containerd[1495]: time="2025-03-17T17:51:18.507478928Z" level=error msg="Failed to destroy network for sandbox \"3a287b471345919edb3a7639a2d16c54930291b1e5e9c520c2543dc01a0641d4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:18.508075 containerd[1495]: time="2025-03-17T17:51:18.508035894Z" level=error msg="encountered an error cleaning up failed sandbox \"3a287b471345919edb3a7639a2d16c54930291b1e5e9c520c2543dc01a0641d4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:18.508136 containerd[1495]: time="2025-03-17T17:51:18.508113103Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9zh68,Uid:8eeb7871-e618-4798-a87d-f7b3c9c67c97,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"3a287b471345919edb3a7639a2d16c54930291b1e5e9c520c2543dc01a0641d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:18.508647 kubelet[2599]: E0317 17:51:18.508577 2599 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a287b471345919edb3a7639a2d16c54930291b1e5e9c520c2543dc01a0641d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:18.508759 kubelet[2599]: 
E0317 17:51:18.508695 2599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a287b471345919edb3a7639a2d16c54930291b1e5e9c520c2543dc01a0641d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9zh68" Mar 17 17:51:18.508759 kubelet[2599]: E0317 17:51:18.508732 2599 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a287b471345919edb3a7639a2d16c54930291b1e5e9c520c2543dc01a0641d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9zh68" Mar 17 17:51:18.509258 kubelet[2599]: E0317 17:51:18.508868 2599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9zh68_calico-system(8eeb7871-e618-4798-a87d-f7b3c9c67c97)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9zh68_calico-system(8eeb7871-e618-4798-a87d-f7b3c9c67c97)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3a287b471345919edb3a7639a2d16c54930291b1e5e9c520c2543dc01a0641d4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9zh68" podUID="8eeb7871-e618-4798-a87d-f7b3c9c67c97" Mar 17 17:51:18.872460 containerd[1495]: time="2025-03-17T17:51:18.872394452Z" level=error msg="Failed to destroy network for sandbox \"f3daeebfad93cd496e897f561ff8e055ad960f28c8676d594ffa89f640a4a005\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:18.872919 containerd[1495]: time="2025-03-17T17:51:18.872883734Z" level=error msg="encountered an error cleaning up failed sandbox \"f3daeebfad93cd496e897f561ff8e055ad960f28c8676d594ffa89f640a4a005\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:18.873031 containerd[1495]: time="2025-03-17T17:51:18.872970583Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-db9856-swh96,Uid:802f1eaf-7d52-4b00-9fa9-f37418e92a64,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f3daeebfad93cd496e897f561ff8e055ad960f28c8676d594ffa89f640a4a005\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:18.873339 kubelet[2599]: E0317 17:51:18.873283 2599 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3daeebfad93cd496e897f561ff8e055ad960f28c8676d594ffa89f640a4a005\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:18.873401 kubelet[2599]: E0317 17:51:18.873373 2599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3daeebfad93cd496e897f561ff8e055ad960f28c8676d594ffa89f640a4a005\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-db9856-swh96" Mar 17 17:51:18.873401 kubelet[2599]: E0317 17:51:18.873395 2599 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3daeebfad93cd496e897f561ff8e055ad960f28c8676d594ffa89f640a4a005\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-db9856-swh96" Mar 17 17:51:18.873471 kubelet[2599]: E0317 17:51:18.873444 2599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-db9856-swh96_calico-apiserver(802f1eaf-7d52-4b00-9fa9-f37418e92a64)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-db9856-swh96_calico-apiserver(802f1eaf-7d52-4b00-9fa9-f37418e92a64)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f3daeebfad93cd496e897f561ff8e055ad960f28c8676d594ffa89f640a4a005\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-db9856-swh96" podUID="802f1eaf-7d52-4b00-9fa9-f37418e92a64" Mar 17 17:51:19.061434 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f3daeebfad93cd496e897f561ff8e055ad960f28c8676d594ffa89f640a4a005-shm.mount: Deactivated successfully. Mar 17 17:51:19.061564 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3a287b471345919edb3a7639a2d16c54930291b1e5e9c520c2543dc01a0641d4-shm.mount: Deactivated successfully. Mar 17 17:51:19.061660 systemd[1]: run-netns-cni\x2d9b9e860d\x2d26a0\x2d127c\x2dd83a\x2dfc09fb59576b.mount: Deactivated successfully. 
Mar 17 17:51:19.061771 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ab5ca27476b38916f05e02fb891a602c7f2ed5ca72ab45f4abc3ebea237758bf-shm.mount: Deactivated successfully. Mar 17 17:51:19.061873 systemd[1]: run-netns-cni\x2de48c53e0\x2d98d9\x2d701b\x2df70c\x2de9eb3d64f34f.mount: Deactivated successfully. Mar 17 17:51:19.061968 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c3bc28f45d2ec64413f682e0fa3ae1f2815a1ace0cf71f660a835588639d5f9d-shm.mount: Deactivated successfully. Mar 17 17:51:19.276128 kubelet[2599]: I0317 17:51:19.275987 2599 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a287b471345919edb3a7639a2d16c54930291b1e5e9c520c2543dc01a0641d4" Mar 17 17:51:19.276847 containerd[1495]: time="2025-03-17T17:51:19.276813366Z" level=info msg="StopPodSandbox for \"3a287b471345919edb3a7639a2d16c54930291b1e5e9c520c2543dc01a0641d4\"" Mar 17 17:51:19.277180 kubelet[2599]: I0317 17:51:19.277153 2599 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3daeebfad93cd496e897f561ff8e055ad960f28c8676d594ffa89f640a4a005" Mar 17 17:51:19.277734 containerd[1495]: time="2025-03-17T17:51:19.277475815Z" level=info msg="StopPodSandbox for \"f3daeebfad93cd496e897f561ff8e055ad960f28c8676d594ffa89f640a4a005\"" Mar 17 17:51:19.277734 containerd[1495]: time="2025-03-17T17:51:19.277695933Z" level=info msg="Ensure that sandbox f3daeebfad93cd496e897f561ff8e055ad960f28c8676d594ffa89f640a4a005 in task-service has been cleanup successfully" Mar 17 17:51:19.277819 containerd[1495]: time="2025-03-17T17:51:19.277777663Z" level=info msg="Ensure that sandbox 3a287b471345919edb3a7639a2d16c54930291b1e5e9c520c2543dc01a0641d4 in task-service has been cleanup successfully" Mar 17 17:51:19.280760 systemd[1]: run-netns-cni\x2d7d6ef3d9\x2d557e\x2df3e5\x2d7631\x2ddb60d7ae7abc.mount: Deactivated successfully. 
Mar 17 17:51:19.280893 systemd[1]: run-netns-cni\x2d92f7829f\x2d8515\x2d0be4\x2d2d14\x2dfcda87fecaad.mount: Deactivated successfully. Mar 17 17:51:19.281127 containerd[1495]: time="2025-03-17T17:51:19.281103405Z" level=info msg="TearDown network for sandbox \"f3daeebfad93cd496e897f561ff8e055ad960f28c8676d594ffa89f640a4a005\" successfully" Mar 17 17:51:19.281127 containerd[1495]: time="2025-03-17T17:51:19.281123815Z" level=info msg="StopPodSandbox for \"f3daeebfad93cd496e897f561ff8e055ad960f28c8676d594ffa89f640a4a005\" returns successfully" Mar 17 17:51:19.281617 containerd[1495]: time="2025-03-17T17:51:19.281267204Z" level=info msg="TearDown network for sandbox \"3a287b471345919edb3a7639a2d16c54930291b1e5e9c520c2543dc01a0641d4\" successfully" Mar 17 17:51:19.281617 containerd[1495]: time="2025-03-17T17:51:19.281283616Z" level=info msg="StopPodSandbox for \"3a287b471345919edb3a7639a2d16c54930291b1e5e9c520c2543dc01a0641d4\" returns successfully" Mar 17 17:51:19.281858 containerd[1495]: time="2025-03-17T17:51:19.281834658Z" level=info msg="StopPodSandbox for \"3e3ed683c84cdf993bdf7652a727c2e03891d4b841b79faae2a62a59c10972b8\"" Mar 17 17:51:19.282439 containerd[1495]: time="2025-03-17T17:51:19.282041851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-db9856-swh96,Uid:802f1eaf-7d52-4b00-9fa9-f37418e92a64,Namespace:calico-apiserver,Attempt:1,}" Mar 17 17:51:19.282588 containerd[1495]: time="2025-03-17T17:51:19.282554029Z" level=info msg="TearDown network for sandbox \"3e3ed683c84cdf993bdf7652a727c2e03891d4b841b79faae2a62a59c10972b8\" successfully" Mar 17 17:51:19.282588 containerd[1495]: time="2025-03-17T17:51:19.282575600Z" level=info msg="StopPodSandbox for \"3e3ed683c84cdf993bdf7652a727c2e03891d4b841b79faae2a62a59c10972b8\" returns successfully" Mar 17 17:51:19.282999 containerd[1495]: time="2025-03-17T17:51:19.282964127Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-9zh68,Uid:8eeb7871-e618-4798-a87d-f7b3c9c67c97,Namespace:calico-system,Attempt:2,}" Mar 17 17:51:19.411525 containerd[1495]: time="2025-03-17T17:51:19.411460297Z" level=error msg="Failed to destroy network for sandbox \"fa0ee85b093d74ef947ec6970c5d27144b44a194911f7cd2536631bd04a90cc6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:19.412006 containerd[1495]: time="2025-03-17T17:51:19.411967904Z" level=error msg="encountered an error cleaning up failed sandbox \"fa0ee85b093d74ef947ec6970c5d27144b44a194911f7cd2536631bd04a90cc6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:19.412071 containerd[1495]: time="2025-03-17T17:51:19.412052810Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t9ppl,Uid:b10ce8b2-d481-4335-85f1-af093a79a238,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"fa0ee85b093d74ef947ec6970c5d27144b44a194911f7cd2536631bd04a90cc6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:19.412362 kubelet[2599]: E0317 17:51:19.412317 2599 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa0ee85b093d74ef947ec6970c5d27144b44a194911f7cd2536631bd04a90cc6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:19.412439 kubelet[2599]: E0317 
17:51:19.412391 2599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa0ee85b093d74ef947ec6970c5d27144b44a194911f7cd2536631bd04a90cc6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-t9ppl" Mar 17 17:51:19.412439 kubelet[2599]: E0317 17:51:19.412416 2599 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa0ee85b093d74ef947ec6970c5d27144b44a194911f7cd2536631bd04a90cc6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-t9ppl" Mar 17 17:51:19.412493 kubelet[2599]: E0317 17:51:19.412467 2599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-t9ppl_kube-system(b10ce8b2-d481-4335-85f1-af093a79a238)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-t9ppl_kube-system(b10ce8b2-d481-4335-85f1-af093a79a238)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fa0ee85b093d74ef947ec6970c5d27144b44a194911f7cd2536631bd04a90cc6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-t9ppl" podUID="b10ce8b2-d481-4335-85f1-af093a79a238" Mar 17 17:51:19.547138 containerd[1495]: time="2025-03-17T17:51:19.546967021Z" level=error msg="Failed to destroy network for sandbox \"b90a88a1e5da255529675e67dad4d325788b366e74fc643f2c63b0f09c948b45\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:19.547527 containerd[1495]: time="2025-03-17T17:51:19.547480080Z" level=error msg="encountered an error cleaning up failed sandbox \"b90a88a1e5da255529675e67dad4d325788b366e74fc643f2c63b0f09c948b45\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:19.547710 containerd[1495]: time="2025-03-17T17:51:19.547557379Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-db9856-fshz9,Uid:e2616273-669f-41e6-aed5-5c36404c0a1a,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"b90a88a1e5da255529675e67dad4d325788b366e74fc643f2c63b0f09c948b45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:19.547930 kubelet[2599]: E0317 17:51:19.547878 2599 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b90a88a1e5da255529675e67dad4d325788b366e74fc643f2c63b0f09c948b45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:19.547991 kubelet[2599]: E0317 17:51:19.547958 2599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b90a88a1e5da255529675e67dad4d325788b366e74fc643f2c63b0f09c948b45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-db9856-fshz9" Mar 17 17:51:19.548050 kubelet[2599]: E0317 17:51:19.547981 2599 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b90a88a1e5da255529675e67dad4d325788b366e74fc643f2c63b0f09c948b45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-db9856-fshz9" Mar 17 17:51:19.548094 kubelet[2599]: E0317 17:51:19.548074 2599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-db9856-fshz9_calico-apiserver(e2616273-669f-41e6-aed5-5c36404c0a1a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-db9856-fshz9_calico-apiserver(e2616273-669f-41e6-aed5-5c36404c0a1a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b90a88a1e5da255529675e67dad4d325788b366e74fc643f2c63b0f09c948b45\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-db9856-fshz9" podUID="e2616273-669f-41e6-aed5-5c36404c0a1a" Mar 17 17:51:20.059553 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b90a88a1e5da255529675e67dad4d325788b366e74fc643f2c63b0f09c948b45-shm.mount: Deactivated successfully. Mar 17 17:51:20.059674 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fa0ee85b093d74ef947ec6970c5d27144b44a194911f7cd2536631bd04a90cc6-shm.mount: Deactivated successfully. 
Mar 17 17:51:20.280827 kubelet[2599]: I0317 17:51:20.280783 2599 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa0ee85b093d74ef947ec6970c5d27144b44a194911f7cd2536631bd04a90cc6" Mar 17 17:51:20.281470 containerd[1495]: time="2025-03-17T17:51:20.281437074Z" level=info msg="StopPodSandbox for \"fa0ee85b093d74ef947ec6970c5d27144b44a194911f7cd2536631bd04a90cc6\"" Mar 17 17:51:20.281713 containerd[1495]: time="2025-03-17T17:51:20.281693002Z" level=info msg="Ensure that sandbox fa0ee85b093d74ef947ec6970c5d27144b44a194911f7cd2536631bd04a90cc6 in task-service has been cleanup successfully" Mar 17 17:51:20.285030 containerd[1495]: time="2025-03-17T17:51:20.281904774Z" level=info msg="TearDown network for sandbox \"fa0ee85b093d74ef947ec6970c5d27144b44a194911f7cd2536631bd04a90cc6\" successfully" Mar 17 17:51:20.285030 containerd[1495]: time="2025-03-17T17:51:20.281920144Z" level=info msg="StopPodSandbox for \"fa0ee85b093d74ef947ec6970c5d27144b44a194911f7cd2536631bd04a90cc6\" returns successfully" Mar 17 17:51:20.285030 containerd[1495]: time="2025-03-17T17:51:20.282713477Z" level=info msg="StopPodSandbox for \"ec8dc90f4256e5e13ea71e3c83af7eded25b71eda92de161921988d536de088c\"" Mar 17 17:51:20.285030 containerd[1495]: time="2025-03-17T17:51:20.282808983Z" level=info msg="TearDown network for sandbox \"ec8dc90f4256e5e13ea71e3c83af7eded25b71eda92de161921988d536de088c\" successfully" Mar 17 17:51:20.285030 containerd[1495]: time="2025-03-17T17:51:20.282819233Z" level=info msg="StopPodSandbox for \"ec8dc90f4256e5e13ea71e3c83af7eded25b71eda92de161921988d536de088c\" returns successfully" Mar 17 17:51:20.285030 containerd[1495]: time="2025-03-17T17:51:20.283539384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t9ppl,Uid:b10ce8b2-d481-4335-85f1-af093a79a238,Namespace:kube-system,Attempt:2,}" Mar 17 17:51:20.285030 containerd[1495]: time="2025-03-17T17:51:20.284268502Z" level=info msg="StopPodSandbox for 
\"b90a88a1e5da255529675e67dad4d325788b366e74fc643f2c63b0f09c948b45\"" Mar 17 17:51:20.285030 containerd[1495]: time="2025-03-17T17:51:20.284412223Z" level=info msg="Ensure that sandbox b90a88a1e5da255529675e67dad4d325788b366e74fc643f2c63b0f09c948b45 in task-service has been cleanup successfully" Mar 17 17:51:20.285030 containerd[1495]: time="2025-03-17T17:51:20.284636660Z" level=info msg="TearDown network for sandbox \"b90a88a1e5da255529675e67dad4d325788b366e74fc643f2c63b0f09c948b45\" successfully" Mar 17 17:51:20.285030 containerd[1495]: time="2025-03-17T17:51:20.284648432Z" level=info msg="StopPodSandbox for \"b90a88a1e5da255529675e67dad4d325788b366e74fc643f2c63b0f09c948b45\" returns successfully" Mar 17 17:51:20.285030 containerd[1495]: time="2025-03-17T17:51:20.284950099Z" level=info msg="StopPodSandbox for \"78661d8268b9ec071762ea23e31e21309122e61c5cbce70ba5e3a5ccaf5e1c2a\"" Mar 17 17:51:20.283976 systemd[1]: run-netns-cni\x2d1ffc29e9\x2d88c4\x2d388f\x2d307a\x2d20ece9b9f8dc.mount: Deactivated successfully. 
Mar 17 17:51:20.285463 kubelet[2599]: E0317 17:51:20.283053 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:51:20.285463 kubelet[2599]: I0317 17:51:20.283786 2599 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b90a88a1e5da255529675e67dad4d325788b366e74fc643f2c63b0f09c948b45" Mar 17 17:51:20.285522 containerd[1495]: time="2025-03-17T17:51:20.285114208Z" level=info msg="TearDown network for sandbox \"78661d8268b9ec071762ea23e31e21309122e61c5cbce70ba5e3a5ccaf5e1c2a\" successfully" Mar 17 17:51:20.285522 containerd[1495]: time="2025-03-17T17:51:20.285136131Z" level=info msg="StopPodSandbox for \"78661d8268b9ec071762ea23e31e21309122e61c5cbce70ba5e3a5ccaf5e1c2a\" returns successfully" Mar 17 17:51:20.285686 containerd[1495]: time="2025-03-17T17:51:20.285659539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-db9856-fshz9,Uid:e2616273-669f-41e6-aed5-5c36404c0a1a,Namespace:calico-apiserver,Attempt:2,}" Mar 17 17:51:20.287056 systemd[1]: run-netns-cni\x2d5f13184d\x2dd81d\x2d369f\x2d6142\x2d64d1870ed40a.mount: Deactivated successfully. Mar 17 17:51:22.075291 systemd[1]: Started sshd@7-10.0.0.104:22-10.0.0.1:47080.service - OpenSSH per-connection server daemon (10.0.0.1:47080). Mar 17 17:51:22.224058 sshd[3764]: Accepted publickey for core from 10.0.0.1 port 47080 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:51:22.225669 sshd-session[3764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:51:22.236913 systemd-logind[1479]: New session 8 of user core. Mar 17 17:51:22.242156 systemd[1]: Started session-8.scope - Session 8 of User core. 
Mar 17 17:51:22.295394 containerd[1495]: time="2025-03-17T17:51:22.295321588Z" level=error msg="Failed to destroy network for sandbox \"a9fcaae0af3522cc5fc57698f5f6bd14e963e141a90fc5f11a5c576ce8759dc0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:22.297366 containerd[1495]: time="2025-03-17T17:51:22.297333283Z" level=error msg="encountered an error cleaning up failed sandbox \"a9fcaae0af3522cc5fc57698f5f6bd14e963e141a90fc5f11a5c576ce8759dc0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:22.297554 containerd[1495]: time="2025-03-17T17:51:22.297484855Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nk8jr,Uid:8c4845cd-7043-485d-9bdd-731020b2270e,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"a9fcaae0af3522cc5fc57698f5f6bd14e963e141a90fc5f11a5c576ce8759dc0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:22.297823 kubelet[2599]: E0317 17:51:22.297774 2599 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9fcaae0af3522cc5fc57698f5f6bd14e963e141a90fc5f11a5c576ce8759dc0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:22.298199 kubelet[2599]: E0317 17:51:22.297843 2599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"a9fcaae0af3522cc5fc57698f5f6bd14e963e141a90fc5f11a5c576ce8759dc0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-nk8jr" Mar 17 17:51:22.298199 kubelet[2599]: E0317 17:51:22.297868 2599 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9fcaae0af3522cc5fc57698f5f6bd14e963e141a90fc5f11a5c576ce8759dc0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-nk8jr" Mar 17 17:51:22.298199 kubelet[2599]: E0317 17:51:22.297912 2599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-nk8jr_kube-system(8c4845cd-7043-485d-9bdd-731020b2270e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-nk8jr_kube-system(8c4845cd-7043-485d-9bdd-731020b2270e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a9fcaae0af3522cc5fc57698f5f6bd14e963e141a90fc5f11a5c576ce8759dc0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-nk8jr" podUID="8c4845cd-7043-485d-9bdd-731020b2270e" Mar 17 17:51:22.299488 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a9fcaae0af3522cc5fc57698f5f6bd14e963e141a90fc5f11a5c576ce8759dc0-shm.mount: Deactivated successfully. 
Mar 17 17:51:22.513548 sshd[3778]: Connection closed by 10.0.0.1 port 47080 Mar 17 17:51:22.513970 sshd-session[3764]: pam_unix(sshd:session): session closed for user core Mar 17 17:51:22.518858 systemd[1]: sshd@7-10.0.0.104:22-10.0.0.1:47080.service: Deactivated successfully. Mar 17 17:51:22.521274 systemd[1]: session-8.scope: Deactivated successfully. Mar 17 17:51:22.522501 systemd-logind[1479]: Session 8 logged out. Waiting for processes to exit. Mar 17 17:51:22.524075 systemd-logind[1479]: Removed session 8. Mar 17 17:51:23.147086 containerd[1495]: time="2025-03-17T17:51:23.147029594Z" level=error msg="Failed to destroy network for sandbox \"7bca8f67a29d3302112647e05c33ad952b2e35e3e88e474c06f764720b805033\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:23.147459 containerd[1495]: time="2025-03-17T17:51:23.147423819Z" level=error msg="encountered an error cleaning up failed sandbox \"7bca8f67a29d3302112647e05c33ad952b2e35e3e88e474c06f764720b805033\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:23.147513 containerd[1495]: time="2025-03-17T17:51:23.147482599Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-db9856-swh96,Uid:802f1eaf-7d52-4b00-9fa9-f37418e92a64,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"7bca8f67a29d3302112647e05c33ad952b2e35e3e88e474c06f764720b805033\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:23.148218 kubelet[2599]: E0317 17:51:23.148166 2599 log.go:32] "RunPodSandbox 
from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bca8f67a29d3302112647e05c33ad952b2e35e3e88e474c06f764720b805033\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:23.148307 kubelet[2599]: E0317 17:51:23.148254 2599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bca8f67a29d3302112647e05c33ad952b2e35e3e88e474c06f764720b805033\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-db9856-swh96" Mar 17 17:51:23.148307 kubelet[2599]: E0317 17:51:23.148281 2599 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bca8f67a29d3302112647e05c33ad952b2e35e3e88e474c06f764720b805033\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-db9856-swh96" Mar 17 17:51:23.148399 kubelet[2599]: E0317 17:51:23.148338 2599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-db9856-swh96_calico-apiserver(802f1eaf-7d52-4b00-9fa9-f37418e92a64)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-db9856-swh96_calico-apiserver(802f1eaf-7d52-4b00-9fa9-f37418e92a64)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7bca8f67a29d3302112647e05c33ad952b2e35e3e88e474c06f764720b805033\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-db9856-swh96" podUID="802f1eaf-7d52-4b00-9fa9-f37418e92a64" Mar 17 17:51:23.197996 containerd[1495]: time="2025-03-17T17:51:23.197914017Z" level=error msg="Failed to destroy network for sandbox \"c383000bdbfc75bc913c41199db28c45179722e99e3fc996aced1021b22e2e13\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:23.199378 containerd[1495]: time="2025-03-17T17:51:23.199328993Z" level=error msg="encountered an error cleaning up failed sandbox \"c383000bdbfc75bc913c41199db28c45179722e99e3fc996aced1021b22e2e13\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:23.199456 containerd[1495]: time="2025-03-17T17:51:23.199393404Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d6b67b85-j5xwp,Uid:4ee3dfd7-d4c4-495e-b4fa-6712bcf8d78e,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"c383000bdbfc75bc913c41199db28c45179722e99e3fc996aced1021b22e2e13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:23.199710 kubelet[2599]: E0317 17:51:23.199636 2599 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c383000bdbfc75bc913c41199db28c45179722e99e3fc996aced1021b22e2e13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Mar 17 17:51:23.199804 kubelet[2599]: E0317 17:51:23.199721 2599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c383000bdbfc75bc913c41199db28c45179722e99e3fc996aced1021b22e2e13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d6b67b85-j5xwp" Mar 17 17:51:23.199804 kubelet[2599]: E0317 17:51:23.199744 2599 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c383000bdbfc75bc913c41199db28c45179722e99e3fc996aced1021b22e2e13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d6b67b85-j5xwp" Mar 17 17:51:23.199804 kubelet[2599]: E0317 17:51:23.199792 2599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7d6b67b85-j5xwp_calico-system(4ee3dfd7-d4c4-495e-b4fa-6712bcf8d78e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7d6b67b85-j5xwp_calico-system(4ee3dfd7-d4c4-495e-b4fa-6712bcf8d78e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c383000bdbfc75bc913c41199db28c45179722e99e3fc996aced1021b22e2e13\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7d6b67b85-j5xwp" podUID="4ee3dfd7-d4c4-495e-b4fa-6712bcf8d78e" Mar 17 17:51:23.202353 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-7bca8f67a29d3302112647e05c33ad952b2e35e3e88e474c06f764720b805033-shm.mount: Deactivated successfully. Mar 17 17:51:23.290672 kubelet[2599]: I0317 17:51:23.290632 2599 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c383000bdbfc75bc913c41199db28c45179722e99e3fc996aced1021b22e2e13" Mar 17 17:51:23.291450 containerd[1495]: time="2025-03-17T17:51:23.291283691Z" level=info msg="StopPodSandbox for \"c383000bdbfc75bc913c41199db28c45179722e99e3fc996aced1021b22e2e13\"" Mar 17 17:51:23.295044 containerd[1495]: time="2025-03-17T17:51:23.291856738Z" level=info msg="Ensure that sandbox c383000bdbfc75bc913c41199db28c45179722e99e3fc996aced1021b22e2e13 in task-service has been cleanup successfully" Mar 17 17:51:23.295044 containerd[1495]: time="2025-03-17T17:51:23.293275341Z" level=info msg="StopPodSandbox for \"7bca8f67a29d3302112647e05c33ad952b2e35e3e88e474c06f764720b805033\"" Mar 17 17:51:23.295044 containerd[1495]: time="2025-03-17T17:51:23.293566755Z" level=info msg="Ensure that sandbox 7bca8f67a29d3302112647e05c33ad952b2e35e3e88e474c06f764720b805033 in task-service has been cleanup successfully" Mar 17 17:51:23.295044 containerd[1495]: time="2025-03-17T17:51:23.294714793Z" level=info msg="StopPodSandbox for \"a9fcaae0af3522cc5fc57698f5f6bd14e963e141a90fc5f11a5c576ce8759dc0\"" Mar 17 17:51:23.295044 containerd[1495]: time="2025-03-17T17:51:23.294898065Z" level=info msg="Ensure that sandbox a9fcaae0af3522cc5fc57698f5f6bd14e963e141a90fc5f11a5c576ce8759dc0 in task-service has been cleanup successfully" Mar 17 17:51:23.294307 systemd[1]: run-netns-cni\x2dba95365d\x2d1613\x2def08\x2d48bc\x2d9fe5c5b49aee.mount: Deactivated successfully. 
Mar 17 17:51:23.295351 kubelet[2599]: I0317 17:51:23.292038 2599 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7bca8f67a29d3302112647e05c33ad952b2e35e3e88e474c06f764720b805033" Mar 17 17:51:23.295351 kubelet[2599]: I0317 17:51:23.294256 2599 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9fcaae0af3522cc5fc57698f5f6bd14e963e141a90fc5f11a5c576ce8759dc0" Mar 17 17:51:23.295599 containerd[1495]: time="2025-03-17T17:51:23.295547205Z" level=info msg="TearDown network for sandbox \"c383000bdbfc75bc913c41199db28c45179722e99e3fc996aced1021b22e2e13\" successfully" Mar 17 17:51:23.295599 containerd[1495]: time="2025-03-17T17:51:23.295583552Z" level=info msg="StopPodSandbox for \"c383000bdbfc75bc913c41199db28c45179722e99e3fc996aced1021b22e2e13\" returns successfully" Mar 17 17:51:23.295996 containerd[1495]: time="2025-03-17T17:51:23.295958301Z" level=info msg="StopPodSandbox for \"ab5ca27476b38916f05e02fb891a602c7f2ed5ca72ab45f4abc3ebea237758bf\"" Mar 17 17:51:23.296124 containerd[1495]: time="2025-03-17T17:51:23.296092701Z" level=info msg="TearDown network for sandbox \"ab5ca27476b38916f05e02fb891a602c7f2ed5ca72ab45f4abc3ebea237758bf\" successfully" Mar 17 17:51:23.296124 containerd[1495]: time="2025-03-17T17:51:23.296118720Z" level=info msg="StopPodSandbox for \"ab5ca27476b38916f05e02fb891a602c7f2ed5ca72ab45f4abc3ebea237758bf\" returns successfully" Mar 17 17:51:23.296490 containerd[1495]: time="2025-03-17T17:51:23.296452662Z" level=info msg="TearDown network for sandbox \"7bca8f67a29d3302112647e05c33ad952b2e35e3e88e474c06f764720b805033\" successfully" Mar 17 17:51:23.296622 containerd[1495]: time="2025-03-17T17:51:23.296502745Z" level=info msg="StopPodSandbox for \"7bca8f67a29d3302112647e05c33ad952b2e35e3e88e474c06f764720b805033\" returns successfully" Mar 17 17:51:23.296957 containerd[1495]: time="2025-03-17T17:51:23.296913912Z" level=info msg="TearDown network for sandbox 
\"a9fcaae0af3522cc5fc57698f5f6bd14e963e141a90fc5f11a5c576ce8759dc0\" successfully" Mar 17 17:51:23.296957 containerd[1495]: time="2025-03-17T17:51:23.296942765Z" level=info msg="StopPodSandbox for \"a9fcaae0af3522cc5fc57698f5f6bd14e963e141a90fc5f11a5c576ce8759dc0\" returns successfully" Mar 17 17:51:23.297195 containerd[1495]: time="2025-03-17T17:51:23.296929420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d6b67b85-j5xwp,Uid:4ee3dfd7-d4c4-495e-b4fa-6712bcf8d78e,Namespace:calico-system,Attempt:2,}" Mar 17 17:51:23.297375 containerd[1495]: time="2025-03-17T17:51:23.297283760Z" level=info msg="StopPodSandbox for \"f3daeebfad93cd496e897f561ff8e055ad960f28c8676d594ffa89f640a4a005\"" Mar 17 17:51:23.297462 containerd[1495]: time="2025-03-17T17:51:23.297384518Z" level=info msg="TearDown network for sandbox \"f3daeebfad93cd496e897f561ff8e055ad960f28c8676d594ffa89f640a4a005\" successfully" Mar 17 17:51:23.297462 containerd[1495]: time="2025-03-17T17:51:23.297398635Z" level=info msg="StopPodSandbox for \"f3daeebfad93cd496e897f561ff8e055ad960f28c8676d594ffa89f640a4a005\" returns successfully" Mar 17 17:51:23.297608 containerd[1495]: time="2025-03-17T17:51:23.297547953Z" level=info msg="StopPodSandbox for \"c3bc28f45d2ec64413f682e0fa3ae1f2815a1ace0cf71f660a835588639d5f9d\"" Mar 17 17:51:23.297652 containerd[1495]: time="2025-03-17T17:51:23.297632179Z" level=info msg="TearDown network for sandbox \"c3bc28f45d2ec64413f682e0fa3ae1f2815a1ace0cf71f660a835588639d5f9d\" successfully" Mar 17 17:51:23.297652 containerd[1495]: time="2025-03-17T17:51:23.297645334Z" level=info msg="StopPodSandbox for \"c3bc28f45d2ec64413f682e0fa3ae1f2815a1ace0cf71f660a835588639d5f9d\" returns successfully" Mar 17 17:51:23.298192 kubelet[2599]: E0317 17:51:23.297965 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:51:23.298613 
containerd[1495]: time="2025-03-17T17:51:23.298338155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nk8jr,Uid:8c4845cd-7043-485d-9bdd-731020b2270e,Namespace:kube-system,Attempt:2,}" Mar 17 17:51:23.299062 containerd[1495]: time="2025-03-17T17:51:23.298993456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-db9856-swh96,Uid:802f1eaf-7d52-4b00-9fa9-f37418e92a64,Namespace:calico-apiserver,Attempt:2,}" Mar 17 17:51:23.299637 systemd[1]: run-netns-cni\x2dcf1b9119\x2d15c1\x2dcd19\x2d9cf0\x2dd98a282a82be.mount: Deactivated successfully. Mar 17 17:51:23.299750 systemd[1]: run-netns-cni\x2d276b5fcc\x2deba4\x2d5b54\x2d1eec\x2d4c7b3ce742d0.mount: Deactivated successfully. Mar 17 17:51:23.688803 containerd[1495]: time="2025-03-17T17:51:23.688750783Z" level=error msg="Failed to destroy network for sandbox \"b6b3d7aea30fda8bae3bbefd9f279c8db4c1f2e037ac2da59423d40949d4333e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:23.689184 containerd[1495]: time="2025-03-17T17:51:23.689150678Z" level=error msg="encountered an error cleaning up failed sandbox \"b6b3d7aea30fda8bae3bbefd9f279c8db4c1f2e037ac2da59423d40949d4333e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:23.689225 containerd[1495]: time="2025-03-17T17:51:23.689208556Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9zh68,Uid:8eeb7871-e618-4798-a87d-f7b3c9c67c97,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"b6b3d7aea30fda8bae3bbefd9f279c8db4c1f2e037ac2da59423d40949d4333e\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:23.689530 kubelet[2599]: E0317 17:51:23.689486 2599 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6b3d7aea30fda8bae3bbefd9f279c8db4c1f2e037ac2da59423d40949d4333e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:23.689598 kubelet[2599]: E0317 17:51:23.689559 2599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6b3d7aea30fda8bae3bbefd9f279c8db4c1f2e037ac2da59423d40949d4333e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9zh68" Mar 17 17:51:23.689598 kubelet[2599]: E0317 17:51:23.689581 2599 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6b3d7aea30fda8bae3bbefd9f279c8db4c1f2e037ac2da59423d40949d4333e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9zh68" Mar 17 17:51:23.689648 kubelet[2599]: E0317 17:51:23.689626 2599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9zh68_calico-system(8eeb7871-e618-4798-a87d-f7b3c9c67c97)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9zh68_calico-system(8eeb7871-e618-4798-a87d-f7b3c9c67c97)\\\": rpc error: code = Unknown desc = failed to setup network for 
sandbox \\\"b6b3d7aea30fda8bae3bbefd9f279c8db4c1f2e037ac2da59423d40949d4333e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9zh68" podUID="8eeb7871-e618-4798-a87d-f7b3c9c67c97" Mar 17 17:51:24.205453 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b6b3d7aea30fda8bae3bbefd9f279c8db4c1f2e037ac2da59423d40949d4333e-shm.mount: Deactivated successfully. Mar 17 17:51:24.297480 kubelet[2599]: I0317 17:51:24.297440 2599 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6b3d7aea30fda8bae3bbefd9f279c8db4c1f2e037ac2da59423d40949d4333e" Mar 17 17:51:24.298365 containerd[1495]: time="2025-03-17T17:51:24.297976326Z" level=info msg="StopPodSandbox for \"b6b3d7aea30fda8bae3bbefd9f279c8db4c1f2e037ac2da59423d40949d4333e\"" Mar 17 17:51:24.298365 containerd[1495]: time="2025-03-17T17:51:24.298199292Z" level=info msg="Ensure that sandbox b6b3d7aea30fda8bae3bbefd9f279c8db4c1f2e037ac2da59423d40949d4333e in task-service has been cleanup successfully" Mar 17 17:51:24.298860 containerd[1495]: time="2025-03-17T17:51:24.298781388Z" level=info msg="TearDown network for sandbox \"b6b3d7aea30fda8bae3bbefd9f279c8db4c1f2e037ac2da59423d40949d4333e\" successfully" Mar 17 17:51:24.298860 containerd[1495]: time="2025-03-17T17:51:24.298798380Z" level=info msg="StopPodSandbox for \"b6b3d7aea30fda8bae3bbefd9f279c8db4c1f2e037ac2da59423d40949d4333e\" returns successfully" Mar 17 17:51:24.299180 containerd[1495]: time="2025-03-17T17:51:24.299150827Z" level=info msg="StopPodSandbox for \"3a287b471345919edb3a7639a2d16c54930291b1e5e9c520c2543dc01a0641d4\"" Mar 17 17:51:24.299319 containerd[1495]: time="2025-03-17T17:51:24.299257175Z" level=info msg="TearDown network for sandbox \"3a287b471345919edb3a7639a2d16c54930291b1e5e9c520c2543dc01a0641d4\" successfully" Mar 17 17:51:24.299319 
containerd[1495]: time="2025-03-17T17:51:24.299268105Z" level=info msg="StopPodSandbox for \"3a287b471345919edb3a7639a2d16c54930291b1e5e9c520c2543dc01a0641d4\" returns successfully" Mar 17 17:51:24.300466 containerd[1495]: time="2025-03-17T17:51:24.300434391Z" level=info msg="StopPodSandbox for \"3e3ed683c84cdf993bdf7652a727c2e03891d4b841b79faae2a62a59c10972b8\"" Mar 17 17:51:24.300600 containerd[1495]: time="2025-03-17T17:51:24.300561679Z" level=info msg="TearDown network for sandbox \"3e3ed683c84cdf993bdf7652a727c2e03891d4b841b79faae2a62a59c10972b8\" successfully" Mar 17 17:51:24.300600 containerd[1495]: time="2025-03-17T17:51:24.300573129Z" level=info msg="StopPodSandbox for \"3e3ed683c84cdf993bdf7652a727c2e03891d4b841b79faae2a62a59c10972b8\" returns successfully" Mar 17 17:51:24.300885 systemd[1]: run-netns-cni\x2d29c2004e\x2d81e4\x2dac96\x2d9cdb\x2d24ea4f5949c3.mount: Deactivated successfully. Mar 17 17:51:24.301067 containerd[1495]: time="2025-03-17T17:51:24.301027998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9zh68,Uid:8eeb7871-e618-4798-a87d-f7b3c9c67c97,Namespace:calico-system,Attempt:3,}" Mar 17 17:51:24.623027 containerd[1495]: time="2025-03-17T17:51:24.622945416Z" level=error msg="Failed to destroy network for sandbox \"c718459327930e395a835f102ab401f2e48060fe2c2c6cbef6ad483b0c4e740e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:24.623468 containerd[1495]: time="2025-03-17T17:51:24.623424780Z" level=error msg="encountered an error cleaning up failed sandbox \"c718459327930e395a835f102ab401f2e48060fe2c2c6cbef6ad483b0c4e740e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 
17:51:24.623527 containerd[1495]: time="2025-03-17T17:51:24.623503578Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t9ppl,Uid:b10ce8b2-d481-4335-85f1-af093a79a238,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"c718459327930e395a835f102ab401f2e48060fe2c2c6cbef6ad483b0c4e740e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:24.623829 kubelet[2599]: E0317 17:51:24.623769 2599 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c718459327930e395a835f102ab401f2e48060fe2c2c6cbef6ad483b0c4e740e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:24.623829 kubelet[2599]: E0317 17:51:24.623839 2599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c718459327930e395a835f102ab401f2e48060fe2c2c6cbef6ad483b0c4e740e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-t9ppl" Mar 17 17:51:24.624337 kubelet[2599]: E0317 17:51:24.623864 2599 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c718459327930e395a835f102ab401f2e48060fe2c2c6cbef6ad483b0c4e740e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-t9ppl" Mar 17 17:51:24.624337 kubelet[2599]: 
E0317 17:51:24.623916 2599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-t9ppl_kube-system(b10ce8b2-d481-4335-85f1-af093a79a238)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-t9ppl_kube-system(b10ce8b2-d481-4335-85f1-af093a79a238)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c718459327930e395a835f102ab401f2e48060fe2c2c6cbef6ad483b0c4e740e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-t9ppl" podUID="b10ce8b2-d481-4335-85f1-af093a79a238" Mar 17 17:51:24.625550 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c718459327930e395a835f102ab401f2e48060fe2c2c6cbef6ad483b0c4e740e-shm.mount: Deactivated successfully. Mar 17 17:51:24.860720 containerd[1495]: time="2025-03-17T17:51:24.860618131Z" level=error msg="Failed to destroy network for sandbox \"2c183dfeb323cfd9dde4cc440f77974bf851926c3aa7c9e2accc48f0e0a01822\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:24.861274 containerd[1495]: time="2025-03-17T17:51:24.861210557Z" level=error msg="encountered an error cleaning up failed sandbox \"2c183dfeb323cfd9dde4cc440f77974bf851926c3aa7c9e2accc48f0e0a01822\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:24.861372 containerd[1495]: time="2025-03-17T17:51:24.861334117Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-db9856-fshz9,Uid:e2616273-669f-41e6-aed5-5c36404c0a1a,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"2c183dfeb323cfd9dde4cc440f77974bf851926c3aa7c9e2accc48f0e0a01822\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:24.861723 kubelet[2599]: E0317 17:51:24.861661 2599 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c183dfeb323cfd9dde4cc440f77974bf851926c3aa7c9e2accc48f0e0a01822\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:24.861804 kubelet[2599]: E0317 17:51:24.861751 2599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c183dfeb323cfd9dde4cc440f77974bf851926c3aa7c9e2accc48f0e0a01822\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-db9856-fshz9" Mar 17 17:51:24.861804 kubelet[2599]: E0317 17:51:24.861781 2599 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c183dfeb323cfd9dde4cc440f77974bf851926c3aa7c9e2accc48f0e0a01822\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-db9856-fshz9" Mar 17 17:51:24.861897 kubelet[2599]: E0317 17:51:24.861853 2599 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"calico-apiserver-db9856-fshz9_calico-apiserver(e2616273-669f-41e6-aed5-5c36404c0a1a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-db9856-fshz9_calico-apiserver(e2616273-669f-41e6-aed5-5c36404c0a1a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2c183dfeb323cfd9dde4cc440f77974bf851926c3aa7c9e2accc48f0e0a01822\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-db9856-fshz9" podUID="e2616273-669f-41e6-aed5-5c36404c0a1a" Mar 17 17:51:24.864119 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2c183dfeb323cfd9dde4cc440f77974bf851926c3aa7c9e2accc48f0e0a01822-shm.mount: Deactivated successfully. Mar 17 17:51:25.070694 containerd[1495]: time="2025-03-17T17:51:25.070544708Z" level=error msg="Failed to destroy network for sandbox \"f5565325feec65207a1acf2ef59fbe892b90a323d863ed912cb0dd3d09c20889\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:25.071309 containerd[1495]: time="2025-03-17T17:51:25.071281354Z" level=error msg="encountered an error cleaning up failed sandbox \"f5565325feec65207a1acf2ef59fbe892b90a323d863ed912cb0dd3d09c20889\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:25.071442 containerd[1495]: time="2025-03-17T17:51:25.071417909Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nk8jr,Uid:8c4845cd-7043-485d-9bdd-731020b2270e,Namespace:kube-system,Attempt:2,} failed, error" 
error="failed to setup network for sandbox \"f5565325feec65207a1acf2ef59fbe892b90a323d863ed912cb0dd3d09c20889\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:25.072239 kubelet[2599]: E0317 17:51:25.071770 2599 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5565325feec65207a1acf2ef59fbe892b90a323d863ed912cb0dd3d09c20889\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:25.072239 kubelet[2599]: E0317 17:51:25.071847 2599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5565325feec65207a1acf2ef59fbe892b90a323d863ed912cb0dd3d09c20889\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-nk8jr" Mar 17 17:51:25.072239 kubelet[2599]: E0317 17:51:25.071867 2599 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5565325feec65207a1acf2ef59fbe892b90a323d863ed912cb0dd3d09c20889\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-nk8jr" Mar 17 17:51:25.072358 kubelet[2599]: E0317 17:51:25.071922 2599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-nk8jr_kube-system(8c4845cd-7043-485d-9bdd-731020b2270e)\" with CreatePodSandboxError: \"Failed to create 
sandbox for pod \\\"coredns-668d6bf9bc-nk8jr_kube-system(8c4845cd-7043-485d-9bdd-731020b2270e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f5565325feec65207a1acf2ef59fbe892b90a323d863ed912cb0dd3d09c20889\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-nk8jr" podUID="8c4845cd-7043-485d-9bdd-731020b2270e" Mar 17 17:51:25.080456 containerd[1495]: time="2025-03-17T17:51:25.080391675Z" level=error msg="Failed to destroy network for sandbox \"25dd480968659bfe50f0d3e17876eff4994504e6bee36164d7809729a8f1de12\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:25.081051 containerd[1495]: time="2025-03-17T17:51:25.080971298Z" level=error msg="encountered an error cleaning up failed sandbox \"25dd480968659bfe50f0d3e17876eff4994504e6bee36164d7809729a8f1de12\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:25.081376 containerd[1495]: time="2025-03-17T17:51:25.081314919Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d6b67b85-j5xwp,Uid:4ee3dfd7-d4c4-495e-b4fa-6712bcf8d78e,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"25dd480968659bfe50f0d3e17876eff4994504e6bee36164d7809729a8f1de12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:25.081769 kubelet[2599]: E0317 17:51:25.081724 2599 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25dd480968659bfe50f0d3e17876eff4994504e6bee36164d7809729a8f1de12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:25.081901 kubelet[2599]: E0317 17:51:25.081801 2599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25dd480968659bfe50f0d3e17876eff4994504e6bee36164d7809729a8f1de12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d6b67b85-j5xwp" Mar 17 17:51:25.081901 kubelet[2599]: E0317 17:51:25.081826 2599 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25dd480968659bfe50f0d3e17876eff4994504e6bee36164d7809729a8f1de12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d6b67b85-j5xwp" Mar 17 17:51:25.081959 kubelet[2599]: E0317 17:51:25.081889 2599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7d6b67b85-j5xwp_calico-system(4ee3dfd7-d4c4-495e-b4fa-6712bcf8d78e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7d6b67b85-j5xwp_calico-system(4ee3dfd7-d4c4-495e-b4fa-6712bcf8d78e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"25dd480968659bfe50f0d3e17876eff4994504e6bee36164d7809729a8f1de12\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7d6b67b85-j5xwp" podUID="4ee3dfd7-d4c4-495e-b4fa-6712bcf8d78e" Mar 17 17:51:25.095267 containerd[1495]: time="2025-03-17T17:51:25.095195378Z" level=error msg="Failed to destroy network for sandbox \"8283abb1d7f07f2e1772939ea0e998f374a695aaefe53c1b9284af89adbe85b1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:25.095775 containerd[1495]: time="2025-03-17T17:51:25.095737951Z" level=error msg="encountered an error cleaning up failed sandbox \"8283abb1d7f07f2e1772939ea0e998f374a695aaefe53c1b9284af89adbe85b1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:25.095956 containerd[1495]: time="2025-03-17T17:51:25.095829292Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9zh68,Uid:8eeb7871-e618-4798-a87d-f7b3c9c67c97,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"8283abb1d7f07f2e1772939ea0e998f374a695aaefe53c1b9284af89adbe85b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:25.096177 containerd[1495]: time="2025-03-17T17:51:25.095996494Z" level=error msg="Failed to destroy network for sandbox \"20d5dae1d6c12324bb73944ca482776c3f59c00f5d43e5aa5f51a227cd0c58a1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Mar 17 17:51:25.096221 kubelet[2599]: E0317 17:51:25.096153 2599 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8283abb1d7f07f2e1772939ea0e998f374a695aaefe53c1b9284af89adbe85b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:25.096297 kubelet[2599]: E0317 17:51:25.096245 2599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8283abb1d7f07f2e1772939ea0e998f374a695aaefe53c1b9284af89adbe85b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9zh68" Mar 17 17:51:25.096297 kubelet[2599]: E0317 17:51:25.096267 2599 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8283abb1d7f07f2e1772939ea0e998f374a695aaefe53c1b9284af89adbe85b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9zh68" Mar 17 17:51:25.096365 kubelet[2599]: E0317 17:51:25.096311 2599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9zh68_calico-system(8eeb7871-e618-4798-a87d-f7b3c9c67c97)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9zh68_calico-system(8eeb7871-e618-4798-a87d-f7b3c9c67c97)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8283abb1d7f07f2e1772939ea0e998f374a695aaefe53c1b9284af89adbe85b1\\\": plugin type=\\\"calico\\\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9zh68" podUID="8eeb7871-e618-4798-a87d-f7b3c9c67c97" Mar 17 17:51:25.096456 containerd[1495]: time="2025-03-17T17:51:25.096439360Z" level=error msg="encountered an error cleaning up failed sandbox \"20d5dae1d6c12324bb73944ca482776c3f59c00f5d43e5aa5f51a227cd0c58a1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:25.097082 containerd[1495]: time="2025-03-17T17:51:25.096494103Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-db9856-swh96,Uid:802f1eaf-7d52-4b00-9fa9-f37418e92a64,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"20d5dae1d6c12324bb73944ca482776c3f59c00f5d43e5aa5f51a227cd0c58a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:25.097269 kubelet[2599]: E0317 17:51:25.096687 2599 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20d5dae1d6c12324bb73944ca482776c3f59c00f5d43e5aa5f51a227cd0c58a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:25.097269 kubelet[2599]: E0317 17:51:25.096721 2599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20d5dae1d6c12324bb73944ca482776c3f59c00f5d43e5aa5f51a227cd0c58a1\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-db9856-swh96" Mar 17 17:51:25.097269 kubelet[2599]: E0317 17:51:25.096735 2599 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20d5dae1d6c12324bb73944ca482776c3f59c00f5d43e5aa5f51a227cd0c58a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-db9856-swh96" Mar 17 17:51:25.097370 kubelet[2599]: E0317 17:51:25.096765 2599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-db9856-swh96_calico-apiserver(802f1eaf-7d52-4b00-9fa9-f37418e92a64)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-db9856-swh96_calico-apiserver(802f1eaf-7d52-4b00-9fa9-f37418e92a64)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"20d5dae1d6c12324bb73944ca482776c3f59c00f5d43e5aa5f51a227cd0c58a1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-db9856-swh96" podUID="802f1eaf-7d52-4b00-9fa9-f37418e92a64" Mar 17 17:51:25.203442 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-25dd480968659bfe50f0d3e17876eff4994504e6bee36164d7809729a8f1de12-shm.mount: Deactivated successfully. 
Mar 17 17:51:25.313236 kubelet[2599]: I0317 17:51:25.309261 2599 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c718459327930e395a835f102ab401f2e48060fe2c2c6cbef6ad483b0c4e740e" Mar 17 17:51:25.313370 containerd[1495]: time="2025-03-17T17:51:25.310220803Z" level=info msg="StopPodSandbox for \"c718459327930e395a835f102ab401f2e48060fe2c2c6cbef6ad483b0c4e740e\"" Mar 17 17:51:25.313370 containerd[1495]: time="2025-03-17T17:51:25.310502930Z" level=info msg="Ensure that sandbox c718459327930e395a835f102ab401f2e48060fe2c2c6cbef6ad483b0c4e740e in task-service has been cleanup successfully" Mar 17 17:51:25.313370 containerd[1495]: time="2025-03-17T17:51:25.311257879Z" level=info msg="TearDown network for sandbox \"c718459327930e395a835f102ab401f2e48060fe2c2c6cbef6ad483b0c4e740e\" successfully" Mar 17 17:51:25.313370 containerd[1495]: time="2025-03-17T17:51:25.311276755Z" level=info msg="StopPodSandbox for \"c718459327930e395a835f102ab401f2e48060fe2c2c6cbef6ad483b0c4e740e\" returns successfully" Mar 17 17:51:25.313714 systemd[1]: run-netns-cni\x2dbc808d00\x2d6219\x2dcde2\x2d36e0\x2d4bf197d7f768.mount: Deactivated successfully. 
Mar 17 17:51:25.315045 containerd[1495]: time="2025-03-17T17:51:25.314909508Z" level=info msg="StopPodSandbox for \"fa0ee85b093d74ef947ec6970c5d27144b44a194911f7cd2536631bd04a90cc6\"" Mar 17 17:51:25.315257 containerd[1495]: time="2025-03-17T17:51:25.315221200Z" level=info msg="TearDown network for sandbox \"fa0ee85b093d74ef947ec6970c5d27144b44a194911f7cd2536631bd04a90cc6\" successfully" Mar 17 17:51:25.315373 containerd[1495]: time="2025-03-17T17:51:25.315265412Z" level=info msg="StopPodSandbox for \"fa0ee85b093d74ef947ec6970c5d27144b44a194911f7cd2536631bd04a90cc6\" returns successfully" Mar 17 17:51:25.317395 containerd[1495]: time="2025-03-17T17:51:25.317211737Z" level=info msg="StopPodSandbox for \"ec8dc90f4256e5e13ea71e3c83af7eded25b71eda92de161921988d536de088c\"" Mar 17 17:51:25.317395 containerd[1495]: time="2025-03-17T17:51:25.317317865Z" level=info msg="TearDown network for sandbox \"ec8dc90f4256e5e13ea71e3c83af7eded25b71eda92de161921988d536de088c\" successfully" Mar 17 17:51:25.317395 containerd[1495]: time="2025-03-17T17:51:25.317330278Z" level=info msg="StopPodSandbox for \"ec8dc90f4256e5e13ea71e3c83af7eded25b71eda92de161921988d536de088c\" returns successfully" Mar 17 17:51:25.317667 kubelet[2599]: E0317 17:51:25.317635 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:51:25.317987 containerd[1495]: time="2025-03-17T17:51:25.317907716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t9ppl,Uid:b10ce8b2-d481-4335-85f1-af093a79a238,Namespace:kube-system,Attempt:3,}" Mar 17 17:51:25.318482 kubelet[2599]: I0317 17:51:25.318467 2599 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8283abb1d7f07f2e1772939ea0e998f374a695aaefe53c1b9284af89adbe85b1" Mar 17 17:51:25.319277 containerd[1495]: time="2025-03-17T17:51:25.319244673Z" level=info msg="StopPodSandbox for 
\"8283abb1d7f07f2e1772939ea0e998f374a695aaefe53c1b9284af89adbe85b1\"" Mar 17 17:51:25.324042 kubelet[2599]: I0317 17:51:25.320072 2599 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5565325feec65207a1acf2ef59fbe892b90a323d863ed912cb0dd3d09c20889" Mar 17 17:51:25.327054 containerd[1495]: time="2025-03-17T17:51:25.320388308Z" level=info msg="StopPodSandbox for \"f5565325feec65207a1acf2ef59fbe892b90a323d863ed912cb0dd3d09c20889\"" Mar 17 17:51:25.327054 containerd[1495]: time="2025-03-17T17:51:25.320587921Z" level=info msg="Ensure that sandbox f5565325feec65207a1acf2ef59fbe892b90a323d863ed912cb0dd3d09c20889 in task-service has been cleanup successfully" Mar 17 17:51:25.327054 containerd[1495]: time="2025-03-17T17:51:25.321839608Z" level=info msg="Ensure that sandbox 8283abb1d7f07f2e1772939ea0e998f374a695aaefe53c1b9284af89adbe85b1 in task-service has been cleanup successfully" Mar 17 17:51:25.327054 containerd[1495]: time="2025-03-17T17:51:25.323287993Z" level=info msg="TearDown network for sandbox \"f5565325feec65207a1acf2ef59fbe892b90a323d863ed912cb0dd3d09c20889\" successfully" Mar 17 17:51:25.327054 containerd[1495]: time="2025-03-17T17:51:25.323304313Z" level=info msg="StopPodSandbox for \"f5565325feec65207a1acf2ef59fbe892b90a323d863ed912cb0dd3d09c20889\" returns successfully" Mar 17 17:51:25.327054 containerd[1495]: time="2025-03-17T17:51:25.323703469Z" level=info msg="StopPodSandbox for \"a9fcaae0af3522cc5fc57698f5f6bd14e963e141a90fc5f11a5c576ce8759dc0\"" Mar 17 17:51:25.327054 containerd[1495]: time="2025-03-17T17:51:25.323791032Z" level=info msg="TearDown network for sandbox \"a9fcaae0af3522cc5fc57698f5f6bd14e963e141a90fc5f11a5c576ce8759dc0\" successfully" Mar 17 17:51:25.327054 containerd[1495]: time="2025-03-17T17:51:25.323800780Z" level=info msg="StopPodSandbox for \"a9fcaae0af3522cc5fc57698f5f6bd14e963e141a90fc5f11a5c576ce8759dc0\" returns successfully" Mar 17 17:51:25.323118 systemd[1]: 
run-netns-cni\x2dc8435d97\x2d37a3\x2d05a4\x2dc62b\x2dfcd1c8ddabf0.mount: Deactivated successfully. Mar 17 17:51:25.329425 containerd[1495]: time="2025-03-17T17:51:25.329204821Z" level=info msg="StopPodSandbox for \"c3bc28f45d2ec64413f682e0fa3ae1f2815a1ace0cf71f660a835588639d5f9d\"" Mar 17 17:51:25.329425 containerd[1495]: time="2025-03-17T17:51:25.329275153Z" level=info msg="TearDown network for sandbox \"c3bc28f45d2ec64413f682e0fa3ae1f2815a1ace0cf71f660a835588639d5f9d\" successfully" Mar 17 17:51:25.329425 containerd[1495]: time="2025-03-17T17:51:25.329284601Z" level=info msg="StopPodSandbox for \"c3bc28f45d2ec64413f682e0fa3ae1f2815a1ace0cf71f660a835588639d5f9d\" returns successfully" Mar 17 17:51:25.329597 kubelet[2599]: E0317 17:51:25.329490 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:51:25.329726 containerd[1495]: time="2025-03-17T17:51:25.329698382Z" level=info msg="TearDown network for sandbox \"8283abb1d7f07f2e1772939ea0e998f374a695aaefe53c1b9284af89adbe85b1\" successfully" Mar 17 17:51:25.329794 containerd[1495]: time="2025-03-17T17:51:25.329724251Z" level=info msg="StopPodSandbox for \"8283abb1d7f07f2e1772939ea0e998f374a695aaefe53c1b9284af89adbe85b1\" returns successfully" Mar 17 17:51:25.329911 containerd[1495]: time="2025-03-17T17:51:25.329883759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nk8jr,Uid:8c4845cd-7043-485d-9bdd-731020b2270e,Namespace:kube-system,Attempt:3,}" Mar 17 17:51:25.330896 systemd[1]: run-netns-cni\x2d8ee2ba6f\x2d878b\x2dfdc9\x2d99ff\x2d509602519ec8.mount: Deactivated successfully. 
Mar 17 17:51:25.333375 containerd[1495]: time="2025-03-17T17:51:25.333237812Z" level=info msg="StopPodSandbox for \"b6b3d7aea30fda8bae3bbefd9f279c8db4c1f2e037ac2da59423d40949d4333e\"" Mar 17 17:51:25.333375 containerd[1495]: time="2025-03-17T17:51:25.333369007Z" level=info msg="TearDown network for sandbox \"b6b3d7aea30fda8bae3bbefd9f279c8db4c1f2e037ac2da59423d40949d4333e\" successfully" Mar 17 17:51:25.333456 containerd[1495]: time="2025-03-17T17:51:25.333380078Z" level=info msg="StopPodSandbox for \"b6b3d7aea30fda8bae3bbefd9f279c8db4c1f2e037ac2da59423d40949d4333e\" returns successfully" Mar 17 17:51:25.333480 kubelet[2599]: I0317 17:51:25.333426 2599 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25dd480968659bfe50f0d3e17876eff4994504e6bee36164d7809729a8f1de12" Mar 17 17:51:25.333866 containerd[1495]: time="2025-03-17T17:51:25.333836990Z" level=info msg="StopPodSandbox for \"25dd480968659bfe50f0d3e17876eff4994504e6bee36164d7809729a8f1de12\"" Mar 17 17:51:25.334090 containerd[1495]: time="2025-03-17T17:51:25.334070708Z" level=info msg="Ensure that sandbox 25dd480968659bfe50f0d3e17876eff4994504e6bee36164d7809729a8f1de12 in task-service has been cleanup successfully" Mar 17 17:51:25.335071 containerd[1495]: time="2025-03-17T17:51:25.334391897Z" level=info msg="StopPodSandbox for \"3a287b471345919edb3a7639a2d16c54930291b1e5e9c520c2543dc01a0641d4\"" Mar 17 17:51:25.335071 containerd[1495]: time="2025-03-17T17:51:25.334479029Z" level=info msg="TearDown network for sandbox \"3a287b471345919edb3a7639a2d16c54930291b1e5e9c520c2543dc01a0641d4\" successfully" Mar 17 17:51:25.335071 containerd[1495]: time="2025-03-17T17:51:25.334490861Z" level=info msg="StopPodSandbox for \"3a287b471345919edb3a7639a2d16c54930291b1e5e9c520c2543dc01a0641d4\" returns successfully" Mar 17 17:51:25.336693 systemd[1]: run-netns-cni\x2d27b0097c\x2d1fd4\x2dffbb\x2dec6f\x2dfe0370e972d7.mount: Deactivated successfully. 
Mar 17 17:51:25.337675 containerd[1495]: time="2025-03-17T17:51:25.337645252Z" level=info msg="StopPodSandbox for \"3e3ed683c84cdf993bdf7652a727c2e03891d4b841b79faae2a62a59c10972b8\""
Mar 17 17:51:25.337759 containerd[1495]: time="2025-03-17T17:51:25.337742423Z" level=info msg="TearDown network for sandbox \"3e3ed683c84cdf993bdf7652a727c2e03891d4b841b79faae2a62a59c10972b8\" successfully"
Mar 17 17:51:25.337786 containerd[1495]: time="2025-03-17T17:51:25.337757712Z" level=info msg="StopPodSandbox for \"3e3ed683c84cdf993bdf7652a727c2e03891d4b841b79faae2a62a59c10972b8\" returns successfully"
Mar 17 17:51:25.338106 containerd[1495]: time="2025-03-17T17:51:25.337819718Z" level=info msg="TearDown network for sandbox \"25dd480968659bfe50f0d3e17876eff4994504e6bee36164d7809729a8f1de12\" successfully"
Mar 17 17:51:25.338106 containerd[1495]: time="2025-03-17T17:51:25.337834606Z" level=info msg="StopPodSandbox for \"25dd480968659bfe50f0d3e17876eff4994504e6bee36164d7809729a8f1de12\" returns successfully"
Mar 17 17:51:25.338243 containerd[1495]: time="2025-03-17T17:51:25.338220717Z" level=info msg="StopPodSandbox for \"c383000bdbfc75bc913c41199db28c45179722e99e3fc996aced1021b22e2e13\""
Mar 17 17:51:25.338335 containerd[1495]: time="2025-03-17T17:51:25.338304804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9zh68,Uid:8eeb7871-e618-4798-a87d-f7b3c9c67c97,Namespace:calico-system,Attempt:4,}"
Mar 17 17:51:25.338580 containerd[1495]: time="2025-03-17T17:51:25.338312658Z" level=info msg="TearDown network for sandbox \"c383000bdbfc75bc913c41199db28c45179722e99e3fc996aced1021b22e2e13\" successfully"
Mar 17 17:51:25.338635 containerd[1495]: time="2025-03-17T17:51:25.338566793Z" level=info msg="StopPodSandbox for \"c383000bdbfc75bc913c41199db28c45179722e99e3fc996aced1021b22e2e13\" returns successfully"
Mar 17 17:51:25.339024 containerd[1495]: time="2025-03-17T17:51:25.338991586Z" level=info msg="StopPodSandbox for \"ab5ca27476b38916f05e02fb891a602c7f2ed5ca72ab45f4abc3ebea237758bf\""
Mar 17 17:51:25.339158 containerd[1495]: time="2025-03-17T17:51:25.339115508Z" level=info msg="TearDown network for sandbox \"ab5ca27476b38916f05e02fb891a602c7f2ed5ca72ab45f4abc3ebea237758bf\" successfully"
Mar 17 17:51:25.339158 containerd[1495]: time="2025-03-17T17:51:25.339133591Z" level=info msg="StopPodSandbox for \"ab5ca27476b38916f05e02fb891a602c7f2ed5ca72ab45f4abc3ebea237758bf\" returns successfully"
Mar 17 17:51:25.339860 kubelet[2599]: I0317 17:51:25.339505 2599 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20d5dae1d6c12324bb73944ca482776c3f59c00f5d43e5aa5f51a227cd0c58a1"
Mar 17 17:51:25.340159 containerd[1495]: time="2025-03-17T17:51:25.340128970Z" level=info msg="StopPodSandbox for \"20d5dae1d6c12324bb73944ca482776c3f59c00f5d43e5aa5f51a227cd0c58a1\""
Mar 17 17:51:25.340360 containerd[1495]: time="2025-03-17T17:51:25.340210452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d6b67b85-j5xwp,Uid:4ee3dfd7-d4c4-495e-b4fa-6712bcf8d78e,Namespace:calico-system,Attempt:3,}"
Mar 17 17:51:25.340476 containerd[1495]: time="2025-03-17T17:51:25.340457563Z" level=info msg="Ensure that sandbox 20d5dae1d6c12324bb73944ca482776c3f59c00f5d43e5aa5f51a227cd0c58a1 in task-service has been cleanup successfully"
Mar 17 17:51:25.342158 kubelet[2599]: I0317 17:51:25.342141 2599 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c183dfeb323cfd9dde4cc440f77974bf851926c3aa7c9e2accc48f0e0a01822"
Mar 17 17:51:25.342537 containerd[1495]: time="2025-03-17T17:51:25.342517670Z" level=info msg="StopPodSandbox for \"2c183dfeb323cfd9dde4cc440f77974bf851926c3aa7c9e2accc48f0e0a01822\""
Mar 17 17:51:25.342804 containerd[1495]: time="2025-03-17T17:51:25.342542516Z" level=info msg="TearDown network for sandbox \"20d5dae1d6c12324bb73944ca482776c3f59c00f5d43e5aa5f51a227cd0c58a1\" successfully"
Mar 17 17:51:25.342804 containerd[1495]: time="2025-03-17T17:51:25.342749313Z" level=info msg="StopPodSandbox for \"20d5dae1d6c12324bb73944ca482776c3f59c00f5d43e5aa5f51a227cd0c58a1\" returns successfully"
Mar 17 17:51:25.342905 containerd[1495]: time="2025-03-17T17:51:25.342887260Z" level=info msg="Ensure that sandbox 2c183dfeb323cfd9dde4cc440f77974bf851926c3aa7c9e2accc48f0e0a01822 in task-service has been cleanup successfully"
Mar 17 17:51:25.343122 containerd[1495]: time="2025-03-17T17:51:25.343095439Z" level=info msg="TearDown network for sandbox \"2c183dfeb323cfd9dde4cc440f77974bf851926c3aa7c9e2accc48f0e0a01822\" successfully"
Mar 17 17:51:25.343161 containerd[1495]: time="2025-03-17T17:51:25.343136125Z" level=info msg="StopPodSandbox for \"2c183dfeb323cfd9dde4cc440f77974bf851926c3aa7c9e2accc48f0e0a01822\" returns successfully"
Mar 17 17:51:25.343194 containerd[1495]: time="2025-03-17T17:51:25.343113944Z" level=info msg="StopPodSandbox for \"7bca8f67a29d3302112647e05c33ad952b2e35e3e88e474c06f764720b805033\""
Mar 17 17:51:25.343618 containerd[1495]: time="2025-03-17T17:51:25.343283239Z" level=info msg="TearDown network for sandbox \"7bca8f67a29d3302112647e05c33ad952b2e35e3e88e474c06f764720b805033\" successfully"
Mar 17 17:51:25.343618 containerd[1495]: time="2025-03-17T17:51:25.343296935Z" level=info msg="StopPodSandbox for \"7bca8f67a29d3302112647e05c33ad952b2e35e3e88e474c06f764720b805033\" returns successfully"
Mar 17 17:51:25.343618 containerd[1495]: time="2025-03-17T17:51:25.343439752Z" level=info msg="StopPodSandbox for \"b90a88a1e5da255529675e67dad4d325788b366e74fc643f2c63b0f09c948b45\""
Mar 17 17:51:25.343618 containerd[1495]: time="2025-03-17T17:51:25.343531493Z" level=info msg="TearDown network for sandbox \"b90a88a1e5da255529675e67dad4d325788b366e74fc643f2c63b0f09c948b45\" successfully"
Mar 17 17:51:25.343618 containerd[1495]: time="2025-03-17T17:51:25.343545759Z" level=info msg="StopPodSandbox for \"b90a88a1e5da255529675e67dad4d325788b366e74fc643f2c63b0f09c948b45\" returns successfully"
Mar 17 17:51:25.343618 containerd[1495]: time="2025-03-17T17:51:25.343562471Z" level=info msg="StopPodSandbox for \"f3daeebfad93cd496e897f561ff8e055ad960f28c8676d594ffa89f640a4a005\""
Mar 17 17:51:25.343914 containerd[1495]: time="2025-03-17T17:51:25.343885915Z" level=info msg="StopPodSandbox for \"78661d8268b9ec071762ea23e31e21309122e61c5cbce70ba5e3a5ccaf5e1c2a\""
Mar 17 17:51:25.344005 containerd[1495]: time="2025-03-17T17:51:25.343986102Z" level=info msg="TearDown network for sandbox \"78661d8268b9ec071762ea23e31e21309122e61c5cbce70ba5e3a5ccaf5e1c2a\" successfully"
Mar 17 17:51:25.344005 containerd[1495]: time="2025-03-17T17:51:25.344001581Z" level=info msg="StopPodSandbox for \"78661d8268b9ec071762ea23e31e21309122e61c5cbce70ba5e3a5ccaf5e1c2a\" returns successfully"
Mar 17 17:51:25.344413 containerd[1495]: time="2025-03-17T17:51:25.344383154Z" level=info msg="TearDown network for sandbox \"f3daeebfad93cd496e897f561ff8e055ad960f28c8676d594ffa89f640a4a005\" successfully"
Mar 17 17:51:25.344413 containerd[1495]: time="2025-03-17T17:51:25.344402690Z" level=info msg="StopPodSandbox for \"f3daeebfad93cd496e897f561ff8e055ad960f28c8676d594ffa89f640a4a005\" returns successfully"
Mar 17 17:51:25.348922 containerd[1495]: time="2025-03-17T17:51:25.348895800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-db9856-fshz9,Uid:e2616273-669f-41e6-aed5-5c36404c0a1a,Namespace:calico-apiserver,Attempt:3,}"
Mar 17 17:51:25.349275 containerd[1495]: time="2025-03-17T17:51:25.349228471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-db9856-swh96,Uid:802f1eaf-7d52-4b00-9fa9-f37418e92a64,Namespace:calico-apiserver,Attempt:3,}"
Mar 17 17:51:26.201888 systemd[1]: run-netns-cni\x2dc2db64c5\x2d5fd8\x2d46b6\x2d49d9\x2d3e599d4bad87.mount: Deactivated successfully.
Mar 17 17:51:26.202007 systemd[1]: run-netns-cni\x2daff4a397\x2dd050\x2d82a4\x2d5fa9\x2d004a28f1857f.mount: Deactivated successfully.
Mar 17 17:51:27.238366 containerd[1495]: time="2025-03-17T17:51:27.238093596Z" level=error msg="Failed to destroy network for sandbox \"611fa18f639a22f7c2dd78848bdc5dee201a8ab88e85e873a7047048763f300a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:51:27.241927 containerd[1495]: time="2025-03-17T17:51:27.241860094Z" level=error msg="encountered an error cleaning up failed sandbox \"611fa18f639a22f7c2dd78848bdc5dee201a8ab88e85e873a7047048763f300a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:51:27.242170 containerd[1495]: time="2025-03-17T17:51:27.241968807Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nk8jr,Uid:8c4845cd-7043-485d-9bdd-731020b2270e,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"611fa18f639a22f7c2dd78848bdc5dee201a8ab88e85e873a7047048763f300a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:51:27.242486 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-611fa18f639a22f7c2dd78848bdc5dee201a8ab88e85e873a7047048763f300a-shm.mount: Deactivated successfully.
Mar 17 17:51:27.244115 kubelet[2599]: E0317 17:51:27.243763 2599 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"611fa18f639a22f7c2dd78848bdc5dee201a8ab88e85e873a7047048763f300a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:51:27.244115 kubelet[2599]: E0317 17:51:27.243846 2599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"611fa18f639a22f7c2dd78848bdc5dee201a8ab88e85e873a7047048763f300a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-nk8jr"
Mar 17 17:51:27.244115 kubelet[2599]: E0317 17:51:27.243879 2599 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"611fa18f639a22f7c2dd78848bdc5dee201a8ab88e85e873a7047048763f300a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-nk8jr"
Mar 17 17:51:27.244655 kubelet[2599]: E0317 17:51:27.243937 2599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-nk8jr_kube-system(8c4845cd-7043-485d-9bdd-731020b2270e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-nk8jr_kube-system(8c4845cd-7043-485d-9bdd-731020b2270e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"611fa18f639a22f7c2dd78848bdc5dee201a8ab88e85e873a7047048763f300a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-nk8jr" podUID="8c4845cd-7043-485d-9bdd-731020b2270e"
Mar 17 17:51:27.258054 containerd[1495]: time="2025-03-17T17:51:27.255530522Z" level=error msg="Failed to destroy network for sandbox \"64ee7dde075136cd6c0ff4e9e4d5018ec194fc3eeceee0436462144828e657f9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:51:27.258595 containerd[1495]: time="2025-03-17T17:51:27.258506169Z" level=error msg="encountered an error cleaning up failed sandbox \"64ee7dde075136cd6c0ff4e9e4d5018ec194fc3eeceee0436462144828e657f9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:51:27.258764 containerd[1495]: time="2025-03-17T17:51:27.258609463Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d6b67b85-j5xwp,Uid:4ee3dfd7-d4c4-495e-b4fa-6712bcf8d78e,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"64ee7dde075136cd6c0ff4e9e4d5018ec194fc3eeceee0436462144828e657f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:51:27.258929 kubelet[2599]: E0317 17:51:27.258874 2599 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64ee7dde075136cd6c0ff4e9e4d5018ec194fc3eeceee0436462144828e657f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:51:27.259008 kubelet[2599]: E0317 17:51:27.258961 2599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64ee7dde075136cd6c0ff4e9e4d5018ec194fc3eeceee0436462144828e657f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d6b67b85-j5xwp"
Mar 17 17:51:27.259008 kubelet[2599]: E0317 17:51:27.258995 2599 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64ee7dde075136cd6c0ff4e9e4d5018ec194fc3eeceee0436462144828e657f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d6b67b85-j5xwp"
Mar 17 17:51:27.259203 kubelet[2599]: E0317 17:51:27.259128 2599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7d6b67b85-j5xwp_calico-system(4ee3dfd7-d4c4-495e-b4fa-6712bcf8d78e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7d6b67b85-j5xwp_calico-system(4ee3dfd7-d4c4-495e-b4fa-6712bcf8d78e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"64ee7dde075136cd6c0ff4e9e4d5018ec194fc3eeceee0436462144828e657f9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7d6b67b85-j5xwp" podUID="4ee3dfd7-d4c4-495e-b4fa-6712bcf8d78e"
Mar 17 17:51:27.261056 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-64ee7dde075136cd6c0ff4e9e4d5018ec194fc3eeceee0436462144828e657f9-shm.mount: Deactivated successfully.
Mar 17 17:51:27.268395 containerd[1495]: time="2025-03-17T17:51:27.268334549Z" level=error msg="Failed to destroy network for sandbox \"7b5c001b9f3d01ac15620c2f92a043c0fc3844ae4e4b16b89c4e37e5f905e7be\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:51:27.276047 containerd[1495]: time="2025-03-17T17:51:27.275971949Z" level=error msg="Failed to destroy network for sandbox \"4b1bcb5cdfc74a6a437aff803d44775132982a95f2b7c6175140e0c9f341858e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:51:27.276686 containerd[1495]: time="2025-03-17T17:51:27.276658734Z" level=error msg="encountered an error cleaning up failed sandbox \"4b1bcb5cdfc74a6a437aff803d44775132982a95f2b7c6175140e0c9f341858e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:51:27.278318 containerd[1495]: time="2025-03-17T17:51:27.278282432Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-db9856-swh96,Uid:802f1eaf-7d52-4b00-9fa9-f37418e92a64,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"4b1bcb5cdfc74a6a437aff803d44775132982a95f2b7c6175140e0c9f341858e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:51:27.281439 kubelet[2599]: E0317 17:51:27.279911 2599 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b1bcb5cdfc74a6a437aff803d44775132982a95f2b7c6175140e0c9f341858e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:51:27.281439 kubelet[2599]: E0317 17:51:27.279997 2599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b1bcb5cdfc74a6a437aff803d44775132982a95f2b7c6175140e0c9f341858e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-db9856-swh96"
Mar 17 17:51:27.281439 kubelet[2599]: E0317 17:51:27.280047 2599 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b1bcb5cdfc74a6a437aff803d44775132982a95f2b7c6175140e0c9f341858e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-db9856-swh96"
Mar 17 17:51:27.281147 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4b1bcb5cdfc74a6a437aff803d44775132982a95f2b7c6175140e0c9f341858e-shm.mount: Deactivated successfully.
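Every sandbox add/delete failure in this log reduces to the same stat the Calico CNI plugin performs: /var/lib/calico/nodename is missing because the calico/node container is not running (or has not mounted /var/lib/calico/). A minimal sketch of that pre-flight check; the `check_nodename` helper and its root-directory parameter are hypothetical, added here only to illustrate the condition the errors describe:

```shell
#!/bin/sh
# Hypothetical helper mirroring the stat in the CNI errors above:
# report whether <root>/var/lib/calico/nodename exists.
check_nodename() {
  root="${1:-}"
  if [ -f "${root}/var/lib/calico/nodename" ]; then
    echo "present"
  else
    echo "missing"
  fi
}

# Against a root that certainly lacks the file, the check reports the
# same condition as the "(add): stat ... no such file or directory" errors.
check_nodename /nonexistent-root   # prints "missing"
```

When this reports "missing" on a real node, the usual next step is to verify that the calico-node DaemonSet pod on that node is running and healthy.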
Mar 17 17:51:27.281749 kubelet[2599]: E0317 17:51:27.280101 2599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-db9856-swh96_calico-apiserver(802f1eaf-7d52-4b00-9fa9-f37418e92a64)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-db9856-swh96_calico-apiserver(802f1eaf-7d52-4b00-9fa9-f37418e92a64)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4b1bcb5cdfc74a6a437aff803d44775132982a95f2b7c6175140e0c9f341858e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-db9856-swh96" podUID="802f1eaf-7d52-4b00-9fa9-f37418e92a64"
Mar 17 17:51:27.287947 containerd[1495]: time="2025-03-17T17:51:27.287258317Z" level=error msg="encountered an error cleaning up failed sandbox \"7b5c001b9f3d01ac15620c2f92a043c0fc3844ae4e4b16b89c4e37e5f905e7be\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:51:27.287947 containerd[1495]: time="2025-03-17T17:51:27.287367400Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t9ppl,Uid:b10ce8b2-d481-4335-85f1-af093a79a238,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"7b5c001b9f3d01ac15620c2f92a043c0fc3844ae4e4b16b89c4e37e5f905e7be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:51:27.288252 kubelet[2599]: E0317 17:51:27.287583 2599 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b5c001b9f3d01ac15620c2f92a043c0fc3844ae4e4b16b89c4e37e5f905e7be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:51:27.288252 kubelet[2599]: E0317 17:51:27.287637 2599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b5c001b9f3d01ac15620c2f92a043c0fc3844ae4e4b16b89c4e37e5f905e7be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-t9ppl"
Mar 17 17:51:27.288252 kubelet[2599]: E0317 17:51:27.287660 2599 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b5c001b9f3d01ac15620c2f92a043c0fc3844ae4e4b16b89c4e37e5f905e7be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-t9ppl"
Mar 17 17:51:27.288763 kubelet[2599]: E0317 17:51:27.287696 2599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-t9ppl_kube-system(b10ce8b2-d481-4335-85f1-af093a79a238)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-t9ppl_kube-system(b10ce8b2-d481-4335-85f1-af093a79a238)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7b5c001b9f3d01ac15620c2f92a043c0fc3844ae4e4b16b89c4e37e5f905e7be\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-t9ppl" podUID="b10ce8b2-d481-4335-85f1-af093a79a238"
Mar 17 17:51:27.291005 containerd[1495]: time="2025-03-17T17:51:27.290863491Z" level=error msg="Failed to destroy network for sandbox \"fbac00f9780cef4d54294d3c83e7122b526db0b29a522ca87666b0c7e7b9c4a0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:51:27.291390 containerd[1495]: time="2025-03-17T17:51:27.291367945Z" level=error msg="encountered an error cleaning up failed sandbox \"fbac00f9780cef4d54294d3c83e7122b526db0b29a522ca87666b0c7e7b9c4a0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:51:27.292203 containerd[1495]: time="2025-03-17T17:51:27.291763005Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-db9856-fshz9,Uid:e2616273-669f-41e6-aed5-5c36404c0a1a,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"fbac00f9780cef4d54294d3c83e7122b526db0b29a522ca87666b0c7e7b9c4a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:51:27.292392 kubelet[2599]: E0317 17:51:27.291992 2599 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbac00f9780cef4d54294d3c83e7122b526db0b29a522ca87666b0c7e7b9c4a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:51:27.292392 kubelet[2599]: E0317 17:51:27.292140 2599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbac00f9780cef4d54294d3c83e7122b526db0b29a522ca87666b0c7e7b9c4a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-db9856-fshz9"
Mar 17 17:51:27.292392 kubelet[2599]: E0317 17:51:27.292161 2599 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbac00f9780cef4d54294d3c83e7122b526db0b29a522ca87666b0c7e7b9c4a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-db9856-fshz9"
Mar 17 17:51:27.291869 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7b5c001b9f3d01ac15620c2f92a043c0fc3844ae4e4b16b89c4e37e5f905e7be-shm.mount: Deactivated successfully.
Mar 17 17:51:27.292801 kubelet[2599]: E0317 17:51:27.292763 2599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-db9856-fshz9_calico-apiserver(e2616273-669f-41e6-aed5-5c36404c0a1a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-db9856-fshz9_calico-apiserver(e2616273-669f-41e6-aed5-5c36404c0a1a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fbac00f9780cef4d54294d3c83e7122b526db0b29a522ca87666b0c7e7b9c4a0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-db9856-fshz9" podUID="e2616273-669f-41e6-aed5-5c36404c0a1a"
Mar 17 17:51:27.313248 containerd[1495]: time="2025-03-17T17:51:27.313176500Z" level=error msg="Failed to destroy network for sandbox \"bad719ef2bd8472495fd310f3c4bd1ad2f5f9caa1e2a7ea36599d5afee4c3d25\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:51:27.313840 containerd[1495]: time="2025-03-17T17:51:27.313798304Z" level=error msg="encountered an error cleaning up failed sandbox \"bad719ef2bd8472495fd310f3c4bd1ad2f5f9caa1e2a7ea36599d5afee4c3d25\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:51:27.313930 containerd[1495]: time="2025-03-17T17:51:27.313882471Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9zh68,Uid:8eeb7871-e618-4798-a87d-f7b3c9c67c97,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"bad719ef2bd8472495fd310f3c4bd1ad2f5f9caa1e2a7ea36599d5afee4c3d25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:51:27.314253 kubelet[2599]: E0317 17:51:27.314205 2599 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bad719ef2bd8472495fd310f3c4bd1ad2f5f9caa1e2a7ea36599d5afee4c3d25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:51:27.314338 kubelet[2599]: E0317 17:51:27.314282 2599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bad719ef2bd8472495fd310f3c4bd1ad2f5f9caa1e2a7ea36599d5afee4c3d25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9zh68"
Mar 17 17:51:27.314338 kubelet[2599]: E0317 17:51:27.314301 2599 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bad719ef2bd8472495fd310f3c4bd1ad2f5f9caa1e2a7ea36599d5afee4c3d25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9zh68"
Mar 17 17:51:27.314411 kubelet[2599]: E0317 17:51:27.314355 2599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9zh68_calico-system(8eeb7871-e618-4798-a87d-f7b3c9c67c97)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9zh68_calico-system(8eeb7871-e618-4798-a87d-f7b3c9c67c97)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bad719ef2bd8472495fd310f3c4bd1ad2f5f9caa1e2a7ea36599d5afee4c3d25\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9zh68" podUID="8eeb7871-e618-4798-a87d-f7b3c9c67c97"
Mar 17 17:51:27.356333 kubelet[2599]: I0317 17:51:27.355657 2599 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbac00f9780cef4d54294d3c83e7122b526db0b29a522ca87666b0c7e7b9c4a0"
Mar 17 17:51:27.356525 containerd[1495]: time="2025-03-17T17:51:27.356413619Z" level=info msg="StopPodSandbox for \"fbac00f9780cef4d54294d3c83e7122b526db0b29a522ca87666b0c7e7b9c4a0\""
Mar 17 17:51:27.356672 containerd[1495]: time="2025-03-17T17:51:27.356636045Z" level=info msg="Ensure that sandbox fbac00f9780cef4d54294d3c83e7122b526db0b29a522ca87666b0c7e7b9c4a0 in task-service has been cleanup successfully"
Mar 17 17:51:27.357097 containerd[1495]: time="2025-03-17T17:51:27.357066832Z" level=info msg="TearDown network for sandbox \"fbac00f9780cef4d54294d3c83e7122b526db0b29a522ca87666b0c7e7b9c4a0\" successfully"
Mar 17 17:51:27.357097 containerd[1495]: time="2025-03-17T17:51:27.357091788Z" level=info msg="StopPodSandbox for \"fbac00f9780cef4d54294d3c83e7122b526db0b29a522ca87666b0c7e7b9c4a0\" returns successfully"
Mar 17 17:51:27.357759 containerd[1495]: time="2025-03-17T17:51:27.357575113Z" level=info msg="StopPodSandbox for \"2c183dfeb323cfd9dde4cc440f77974bf851926c3aa7c9e2accc48f0e0a01822\""
Mar 17 17:51:27.357759 containerd[1495]: time="2025-03-17T17:51:27.357691550Z" level=info msg="TearDown network for sandbox \"2c183dfeb323cfd9dde4cc440f77974bf851926c3aa7c9e2accc48f0e0a01822\" successfully"
Mar 17 17:51:27.357759 containerd[1495]: time="2025-03-17T17:51:27.357702180Z" level=info msg="StopPodSandbox for \"2c183dfeb323cfd9dde4cc440f77974bf851926c3aa7c9e2accc48f0e0a01822\" returns successfully"
Mar 17 17:51:27.357989 containerd[1495]: time="2025-03-17T17:51:27.357963299Z" level=info msg="StopPodSandbox for \"b90a88a1e5da255529675e67dad4d325788b366e74fc643f2c63b0f09c948b45\""
Mar 17 17:51:27.358101 containerd[1495]: time="2025-03-17T17:51:27.358078465Z" level=info msg="TearDown network for sandbox \"b90a88a1e5da255529675e67dad4d325788b366e74fc643f2c63b0f09c948b45\" successfully"
Mar 17 17:51:27.358101 containerd[1495]: time="2025-03-17T17:51:27.358093492Z" level=info msg="StopPodSandbox for \"b90a88a1e5da255529675e67dad4d325788b366e74fc643f2c63b0f09c948b45\" returns successfully"
Mar 17 17:51:27.358633 containerd[1495]: time="2025-03-17T17:51:27.358609117Z" level=info msg="StopPodSandbox for \"78661d8268b9ec071762ea23e31e21309122e61c5cbce70ba5e3a5ccaf5e1c2a\""
Mar 17 17:51:27.358744 containerd[1495]: time="2025-03-17T17:51:27.358721387Z" level=info msg="TearDown network for sandbox \"78661d8268b9ec071762ea23e31e21309122e61c5cbce70ba5e3a5ccaf5e1c2a\" successfully"
Mar 17 17:51:27.358744 containerd[1495]: time="2025-03-17T17:51:27.358738019Z" level=info msg="StopPodSandbox for \"78661d8268b9ec071762ea23e31e21309122e61c5cbce70ba5e3a5ccaf5e1c2a\" returns successfully"
Mar 17 17:51:27.359218 containerd[1495]: time="2025-03-17T17:51:27.359197608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-db9856-fshz9,Uid:e2616273-669f-41e6-aed5-5c36404c0a1a,Namespace:calico-apiserver,Attempt:4,}"
Mar 17 17:51:27.360227 kubelet[2599]: I0317 17:51:27.359798 2599 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="611fa18f639a22f7c2dd78848bdc5dee201a8ab88e85e873a7047048763f300a"
Mar 17 17:51:27.360362 containerd[1495]: time="2025-03-17T17:51:27.360298098Z" level=info msg="StopPodSandbox for \"611fa18f639a22f7c2dd78848bdc5dee201a8ab88e85e873a7047048763f300a\""
Mar 17 17:51:27.360500 containerd[1495]: time="2025-03-17T17:51:27.360468847Z" level=info msg="Ensure that sandbox 611fa18f639a22f7c2dd78848bdc5dee201a8ab88e85e873a7047048763f300a in task-service has been cleanup successfully"
Mar 17 17:51:27.360735 containerd[1495]: time="2025-03-17T17:51:27.360687576Z" level=info msg="TearDown network for sandbox \"611fa18f639a22f7c2dd78848bdc5dee201a8ab88e85e873a7047048763f300a\" successfully"
Mar 17 17:51:27.360735 containerd[1495]: time="2025-03-17T17:51:27.360699929Z" level=info msg="StopPodSandbox for \"611fa18f639a22f7c2dd78848bdc5dee201a8ab88e85e873a7047048763f300a\" returns successfully"
Mar 17 17:51:27.409179 containerd[1495]: time="2025-03-17T17:51:27.361048692Z" level=info msg="StopPodSandbox for \"f5565325feec65207a1acf2ef59fbe892b90a323d863ed912cb0dd3d09c20889\""
Mar 17 17:51:27.409179 containerd[1495]: time="2025-03-17T17:51:27.361123962Z" level=info msg="TearDown network for sandbox \"f5565325feec65207a1acf2ef59fbe892b90a323d863ed912cb0dd3d09c20889\" successfully"
Mar 17 17:51:27.409179 containerd[1495]: time="2025-03-17T17:51:27.361136125Z" level=info msg="StopPodSandbox for \"f5565325feec65207a1acf2ef59fbe892b90a323d863ed912cb0dd3d09c20889\" returns successfully"
Mar 17 17:51:27.409179 containerd[1495]: time="2025-03-17T17:51:27.361283692Z" level=info msg="StopPodSandbox for \"a9fcaae0af3522cc5fc57698f5f6bd14e963e141a90fc5f11a5c576ce8759dc0\""
Mar 17 17:51:27.409179 containerd[1495]: time="2025-03-17T17:51:27.361349505Z" level=info msg="TearDown network for sandbox \"a9fcaae0af3522cc5fc57698f5f6bd14e963e141a90fc5f11a5c576ce8759dc0\" successfully"
Mar 17 17:51:27.409179 containerd[1495]: time="2025-03-17T17:51:27.361357891Z" level=info msg="StopPodSandbox for \"a9fcaae0af3522cc5fc57698f5f6bd14e963e141a90fc5f11a5c576ce8759dc0\" returns successfully"
Mar 17 17:51:27.409179 containerd[1495]: time="2025-03-17T17:51:27.361500547Z" level=info msg="StopPodSandbox for \"c3bc28f45d2ec64413f682e0fa3ae1f2815a1ace0cf71f660a835588639d5f9d\""
Mar 17 17:51:27.409179 containerd[1495]: time="2025-03-17T17:51:27.361574706Z" level=info msg="TearDown network for sandbox \"c3bc28f45d2ec64413f682e0fa3ae1f2815a1ace0cf71f660a835588639d5f9d\" successfully"
Mar 17 17:51:27.409179 containerd[1495]: time="2025-03-17T17:51:27.361583112Z" level=info msg="StopPodSandbox for \"c3bc28f45d2ec64413f682e0fa3ae1f2815a1ace0cf71f660a835588639d5f9d\" returns successfully"
Mar 17 17:51:27.409179 containerd[1495]: time="2025-03-17T17:51:27.377813860Z" level=info msg="StopPodSandbox for \"64ee7dde075136cd6c0ff4e9e4d5018ec194fc3eeceee0436462144828e657f9\""
Mar 17 17:51:27.409179 containerd[1495]: time="2025-03-17T17:51:27.378068296Z" level=info msg="Ensure that sandbox 64ee7dde075136cd6c0ff4e9e4d5018ec194fc3eeceee0436462144828e657f9 in task-service has been cleanup successfully"
Mar 17 17:51:27.409179 containerd[1495]: time="2025-03-17T17:51:27.378564895Z" level=info msg="TearDown network for sandbox \"64ee7dde075136cd6c0ff4e9e4d5018ec194fc3eeceee0436462144828e657f9\" successfully"
Mar 17 17:51:27.409179 containerd[1495]: time="2025-03-17T17:51:27.378590493Z" level=info msg="StopPodSandbox for \"64ee7dde075136cd6c0ff4e9e4d5018ec194fc3eeceee0436462144828e657f9\" returns successfully"
Mar 17 17:51:27.409179 containerd[1495]: time="2025-03-17T17:51:27.379110957Z" level=info msg="StopPodSandbox for \"25dd480968659bfe50f0d3e17876eff4994504e6bee36164d7809729a8f1de12\""
Mar 17 17:51:27.409179 containerd[1495]: time="2025-03-17T17:51:27.379191939Z" level=info msg="TearDown network for sandbox \"25dd480968659bfe50f0d3e17876eff4994504e6bee36164d7809729a8f1de12\" successfully"
Mar 17 17:51:27.409179 containerd[1495]: time="2025-03-17T17:51:27.379202759Z" level=info msg="StopPodSandbox for \"25dd480968659bfe50f0d3e17876eff4994504e6bee36164d7809729a8f1de12\" returns successfully"
Mar 17 17:51:27.409179 containerd[1495]: time="2025-03-17T17:51:27.379422099Z" level=info msg="StopPodSandbox for \"c383000bdbfc75bc913c41199db28c45179722e99e3fc996aced1021b22e2e13\""
Mar 17 17:51:27.409179 containerd[1495]: time="2025-03-17T17:51:27.379525894Z" level=info msg="TearDown network for sandbox \"c383000bdbfc75bc913c41199db28c45179722e99e3fc996aced1021b22e2e13\" successfully"
Mar 17 17:51:27.409179 containerd[1495]: time="2025-03-17T17:51:27.379596195Z" level=info msg="StopPodSandbox for \"c383000bdbfc75bc913c41199db28c45179722e99e3fc996aced1021b22e2e13\" returns successfully"
Mar 17 17:51:27.409179 containerd[1495]: time="2025-03-17T17:51:27.379842676Z" level=info msg="StopPodSandbox for \"ab5ca27476b38916f05e02fb891a602c7f2ed5ca72ab45f4abc3ebea237758bf\""
Mar 17 17:51:27.409179 containerd[1495]: time="2025-03-17T17:51:27.379910593Z" level=info msg="TearDown network for sandbox \"ab5ca27476b38916f05e02fb891a602c7f2ed5ca72ab45f4abc3ebea237758bf\" successfully"
Mar 17 17:51:27.409179 containerd[1495]: time="2025-03-17T17:51:27.379918859Z" level=info msg="StopPodSandbox for \"ab5ca27476b38916f05e02fb891a602c7f2ed5ca72ab45f4abc3ebea237758bf\" returns successfully"
Mar 17 17:51:27.409179 containerd[1495]: time="2025-03-17T17:51:27.380565720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d6b67b85-j5xwp,Uid:4ee3dfd7-d4c4-495e-b4fa-6712bcf8d78e,Namespace:calico-system,Attempt:4,}"
Mar 17 17:51:27.409179 containerd[1495]: time="2025-03-17T17:51:27.380610152Z" level=info msg="StopPodSandbox for \"4b1bcb5cdfc74a6a437aff803d44775132982a95f2b7c6175140e0c9f341858e\""
Mar 17 17:51:27.409179 containerd[1495]: time="2025-03-17T17:51:27.380806129Z" level=info msg="Ensure that sandbox 4b1bcb5cdfc74a6a437aff803d44775132982a95f2b7c6175140e0c9f341858e in task-service has been cleanup successfully"
Mar 17 17:51:27.409179 containerd[1495]: time="2025-03-17T17:51:27.380989051Z" level=info msg="TearDown network for sandbox
\"4b1bcb5cdfc74a6a437aff803d44775132982a95f2b7c6175140e0c9f341858e\" successfully" Mar 17 17:51:27.409179 containerd[1495]: time="2025-03-17T17:51:27.381001174Z" level=info msg="StopPodSandbox for \"4b1bcb5cdfc74a6a437aff803d44775132982a95f2b7c6175140e0c9f341858e\" returns successfully" Mar 17 17:51:27.409179 containerd[1495]: time="2025-03-17T17:51:27.381332133Z" level=info msg="StopPodSandbox for \"20d5dae1d6c12324bb73944ca482776c3f59c00f5d43e5aa5f51a227cd0c58a1\"" Mar 17 17:51:27.409179 containerd[1495]: time="2025-03-17T17:51:27.381435416Z" level=info msg="TearDown network for sandbox \"20d5dae1d6c12324bb73944ca482776c3f59c00f5d43e5aa5f51a227cd0c58a1\" successfully" Mar 17 17:51:27.409179 containerd[1495]: time="2025-03-17T17:51:27.381448230Z" level=info msg="StopPodSandbox for \"20d5dae1d6c12324bb73944ca482776c3f59c00f5d43e5aa5f51a227cd0c58a1\" returns successfully" Mar 17 17:51:27.409179 containerd[1495]: time="2025-03-17T17:51:27.381693499Z" level=info msg="StopPodSandbox for \"7bca8f67a29d3302112647e05c33ad952b2e35e3e88e474c06f764720b805033\"" Mar 17 17:51:27.409179 containerd[1495]: time="2025-03-17T17:51:27.381767548Z" level=info msg="TearDown network for sandbox \"7bca8f67a29d3302112647e05c33ad952b2e35e3e88e474c06f764720b805033\" successfully" Mar 17 17:51:27.409179 containerd[1495]: time="2025-03-17T17:51:27.381776254Z" level=info msg="StopPodSandbox for \"7bca8f67a29d3302112647e05c33ad952b2e35e3e88e474c06f764720b805033\" returns successfully" Mar 17 17:51:27.409179 containerd[1495]: time="2025-03-17T17:51:27.382153339Z" level=info msg="StopPodSandbox for \"f3daeebfad93cd496e897f561ff8e055ad960f28c8676d594ffa89f640a4a005\"" Mar 17 17:51:27.409179 containerd[1495]: time="2025-03-17T17:51:27.382234842Z" level=info msg="TearDown network for sandbox \"f3daeebfad93cd496e897f561ff8e055ad960f28c8676d594ffa89f640a4a005\" successfully" Mar 17 17:51:27.409179 containerd[1495]: time="2025-03-17T17:51:27.382244931Z" level=info msg="StopPodSandbox for 
\"f3daeebfad93cd496e897f561ff8e055ad960f28c8676d594ffa89f640a4a005\" returns successfully" Mar 17 17:51:27.409179 containerd[1495]: time="2025-03-17T17:51:27.382647975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-db9856-swh96,Uid:802f1eaf-7d52-4b00-9fa9-f37418e92a64,Namespace:calico-apiserver,Attempt:4,}" Mar 17 17:51:27.409179 containerd[1495]: time="2025-03-17T17:51:27.383065397Z" level=info msg="StopPodSandbox for \"7b5c001b9f3d01ac15620c2f92a043c0fc3844ae4e4b16b89c4e37e5f905e7be\"" Mar 17 17:51:27.410515 kubelet[2599]: I0317 17:51:27.377294 2599 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64ee7dde075136cd6c0ff4e9e4d5018ec194fc3eeceee0436462144828e657f9" Mar 17 17:51:27.410515 kubelet[2599]: I0317 17:51:27.380271 2599 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b1bcb5cdfc74a6a437aff803d44775132982a95f2b7c6175140e0c9f341858e" Mar 17 17:51:27.410515 kubelet[2599]: I0317 17:51:27.382341 2599 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b5c001b9f3d01ac15620c2f92a043c0fc3844ae4e4b16b89c4e37e5f905e7be" Mar 17 17:51:27.410515 kubelet[2599]: E0317 17:51:27.384993 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:51:27.410515 kubelet[2599]: I0317 17:51:27.385139 2599 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bad719ef2bd8472495fd310f3c4bd1ad2f5f9caa1e2a7ea36599d5afee4c3d25" Mar 17 17:51:27.410515 kubelet[2599]: E0317 17:51:27.402476 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:51:27.410741 containerd[1495]: time="2025-03-17T17:51:27.383222781Z" level=info msg="Ensure that sandbox 
7b5c001b9f3d01ac15620c2f92a043c0fc3844ae4e4b16b89c4e37e5f905e7be in task-service has been cleanup successfully" Mar 17 17:51:27.410741 containerd[1495]: time="2025-03-17T17:51:27.383484310Z" level=info msg="TearDown network for sandbox \"7b5c001b9f3d01ac15620c2f92a043c0fc3844ae4e4b16b89c4e37e5f905e7be\" successfully" Mar 17 17:51:27.410741 containerd[1495]: time="2025-03-17T17:51:27.383495832Z" level=info msg="StopPodSandbox for \"7b5c001b9f3d01ac15620c2f92a043c0fc3844ae4e4b16b89c4e37e5f905e7be\" returns successfully" Mar 17 17:51:27.410741 containerd[1495]: time="2025-03-17T17:51:27.383807014Z" level=info msg="StopPodSandbox for \"c718459327930e395a835f102ab401f2e48060fe2c2c6cbef6ad483b0c4e740e\"" Mar 17 17:51:27.410741 containerd[1495]: time="2025-03-17T17:51:27.383950322Z" level=info msg="TearDown network for sandbox \"c718459327930e395a835f102ab401f2e48060fe2c2c6cbef6ad483b0c4e740e\" successfully" Mar 17 17:51:27.410741 containerd[1495]: time="2025-03-17T17:51:27.383964148Z" level=info msg="StopPodSandbox for \"c718459327930e395a835f102ab401f2e48060fe2c2c6cbef6ad483b0c4e740e\" returns successfully" Mar 17 17:51:27.410741 containerd[1495]: time="2025-03-17T17:51:27.384403830Z" level=info msg="StopPodSandbox for \"fa0ee85b093d74ef947ec6970c5d27144b44a194911f7cd2536631bd04a90cc6\"" Mar 17 17:51:27.410741 containerd[1495]: time="2025-03-17T17:51:27.384471116Z" level=info msg="TearDown network for sandbox \"fa0ee85b093d74ef947ec6970c5d27144b44a194911f7cd2536631bd04a90cc6\" successfully" Mar 17 17:51:27.410741 containerd[1495]: time="2025-03-17T17:51:27.384479803Z" level=info msg="StopPodSandbox for \"fa0ee85b093d74ef947ec6970c5d27144b44a194911f7cd2536631bd04a90cc6\" returns successfully" Mar 17 17:51:27.410741 containerd[1495]: time="2025-03-17T17:51:27.384770146Z" level=info msg="StopPodSandbox for \"ec8dc90f4256e5e13ea71e3c83af7eded25b71eda92de161921988d536de088c\"" Mar 17 17:51:27.410741 containerd[1495]: time="2025-03-17T17:51:27.384845407Z" level=info 
msg="TearDown network for sandbox \"ec8dc90f4256e5e13ea71e3c83af7eded25b71eda92de161921988d536de088c\" successfully" Mar 17 17:51:27.410741 containerd[1495]: time="2025-03-17T17:51:27.384854995Z" level=info msg="StopPodSandbox for \"ec8dc90f4256e5e13ea71e3c83af7eded25b71eda92de161921988d536de088c\" returns successfully" Mar 17 17:51:27.410741 containerd[1495]: time="2025-03-17T17:51:27.385300579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t9ppl,Uid:b10ce8b2-d481-4335-85f1-af093a79a238,Namespace:kube-system,Attempt:4,}" Mar 17 17:51:27.410741 containerd[1495]: time="2025-03-17T17:51:27.385643891Z" level=info msg="StopPodSandbox for \"bad719ef2bd8472495fd310f3c4bd1ad2f5f9caa1e2a7ea36599d5afee4c3d25\"" Mar 17 17:51:27.410741 containerd[1495]: time="2025-03-17T17:51:27.385807667Z" level=info msg="Ensure that sandbox bad719ef2bd8472495fd310f3c4bd1ad2f5f9caa1e2a7ea36599d5afee4c3d25 in task-service has been cleanup successfully" Mar 17 17:51:27.410741 containerd[1495]: time="2025-03-17T17:51:27.385981312Z" level=info msg="TearDown network for sandbox \"bad719ef2bd8472495fd310f3c4bd1ad2f5f9caa1e2a7ea36599d5afee4c3d25\" successfully" Mar 17 17:51:27.410741 containerd[1495]: time="2025-03-17T17:51:27.385995298Z" level=info msg="StopPodSandbox for \"bad719ef2bd8472495fd310f3c4bd1ad2f5f9caa1e2a7ea36599d5afee4c3d25\" returns successfully" Mar 17 17:51:27.410741 containerd[1495]: time="2025-03-17T17:51:27.386290951Z" level=info msg="StopPodSandbox for \"8283abb1d7f07f2e1772939ea0e998f374a695aaefe53c1b9284af89adbe85b1\"" Mar 17 17:51:27.410741 containerd[1495]: time="2025-03-17T17:51:27.386406146Z" level=info msg="TearDown network for sandbox \"8283abb1d7f07f2e1772939ea0e998f374a695aaefe53c1b9284af89adbe85b1\" successfully" Mar 17 17:51:27.410741 containerd[1495]: time="2025-03-17T17:51:27.386422938Z" level=info msg="StopPodSandbox for \"8283abb1d7f07f2e1772939ea0e998f374a695aaefe53c1b9284af89adbe85b1\" returns successfully" Mar 17 
17:51:27.410741 containerd[1495]: time="2025-03-17T17:51:27.386669730Z" level=info msg="StopPodSandbox for \"b6b3d7aea30fda8bae3bbefd9f279c8db4c1f2e037ac2da59423d40949d4333e\"" Mar 17 17:51:27.410741 containerd[1495]: time="2025-03-17T17:51:27.386751222Z" level=info msg="TearDown network for sandbox \"b6b3d7aea30fda8bae3bbefd9f279c8db4c1f2e037ac2da59423d40949d4333e\" successfully" Mar 17 17:51:27.410741 containerd[1495]: time="2025-03-17T17:51:27.386760680Z" level=info msg="StopPodSandbox for \"b6b3d7aea30fda8bae3bbefd9f279c8db4c1f2e037ac2da59423d40949d4333e\" returns successfully" Mar 17 17:51:27.410741 containerd[1495]: time="2025-03-17T17:51:27.387050212Z" level=info msg="StopPodSandbox for \"3a287b471345919edb3a7639a2d16c54930291b1e5e9c520c2543dc01a0641d4\"" Mar 17 17:51:27.410741 containerd[1495]: time="2025-03-17T17:51:27.387144529Z" level=info msg="TearDown network for sandbox \"3a287b471345919edb3a7639a2d16c54930291b1e5e9c520c2543dc01a0641d4\" successfully" Mar 17 17:51:27.410741 containerd[1495]: time="2025-03-17T17:51:27.387155779Z" level=info msg="StopPodSandbox for \"3a287b471345919edb3a7639a2d16c54930291b1e5e9c520c2543dc01a0641d4\" returns successfully" Mar 17 17:51:27.410741 containerd[1495]: time="2025-03-17T17:51:27.387398735Z" level=info msg="StopPodSandbox for \"3e3ed683c84cdf993bdf7652a727c2e03891d4b841b79faae2a62a59c10972b8\"" Mar 17 17:51:27.410741 containerd[1495]: time="2025-03-17T17:51:27.387472542Z" level=info msg="TearDown network for sandbox \"3e3ed683c84cdf993bdf7652a727c2e03891d4b841b79faae2a62a59c10972b8\" successfully" Mar 17 17:51:27.410741 containerd[1495]: time="2025-03-17T17:51:27.387481208Z" level=info msg="StopPodSandbox for \"3e3ed683c84cdf993bdf7652a727c2e03891d4b841b79faae2a62a59c10972b8\" returns successfully" Mar 17 17:51:27.410741 containerd[1495]: time="2025-03-17T17:51:27.387802770Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-9zh68,Uid:8eeb7871-e618-4798-a87d-f7b3c9c67c97,Namespace:calico-system,Attempt:5,}" Mar 17 17:51:27.410741 containerd[1495]: time="2025-03-17T17:51:27.402921839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nk8jr,Uid:8c4845cd-7043-485d-9bdd-731020b2270e,Namespace:kube-system,Attempt:4,}" Mar 17 17:51:27.537445 systemd[1]: Started sshd@8-10.0.0.104:22-10.0.0.1:57194.service - OpenSSH per-connection server daemon (10.0.0.1:57194). Mar 17 17:51:27.585381 sshd[4389]: Accepted publickey for core from 10.0.0.1 port 57194 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:51:27.586970 sshd-session[4389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:51:27.593237 systemd-logind[1479]: New session 9 of user core. Mar 17 17:51:27.600149 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 17 17:51:27.764902 sshd[4391]: Connection closed by 10.0.0.1 port 57194 Mar 17 17:51:27.765332 sshd-session[4389]: pam_unix(sshd:session): session closed for user core Mar 17 17:51:27.770131 systemd[1]: sshd@8-10.0.0.104:22-10.0.0.1:57194.service: Deactivated successfully. Mar 17 17:51:27.773177 systemd[1]: session-9.scope: Deactivated successfully. Mar 17 17:51:27.775157 systemd-logind[1479]: Session 9 logged out. Waiting for processes to exit. Mar 17 17:51:27.776203 systemd-logind[1479]: Removed session 9. Mar 17 17:51:28.202026 systemd[1]: run-netns-cni\x2d17667555\x2d5100\x2d7e1b\x2dfeb0\x2dcc61f31a8143.mount: Deactivated successfully. Mar 17 17:51:28.202137 systemd[1]: run-netns-cni\x2da0e0eefe\x2d5199\x2d8ec3\x2d88f7\x2dfbebfdece2df.mount: Deactivated successfully. Mar 17 17:51:28.202213 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fbac00f9780cef4d54294d3c83e7122b526db0b29a522ca87666b0c7e7b9c4a0-shm.mount: Deactivated successfully. 
Mar 17 17:51:28.202289 systemd[1]: run-netns-cni\x2d060bf19a\x2d103d\x2d71f1\x2d66b2\x2d1dcdc7dc50d2.mount: Deactivated successfully. Mar 17 17:51:28.202355 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bad719ef2bd8472495fd310f3c4bd1ad2f5f9caa1e2a7ea36599d5afee4c3d25-shm.mount: Deactivated successfully. Mar 17 17:51:28.202428 systemd[1]: run-netns-cni\x2d28edc8ba\x2d342c\x2d767d\x2dab43\x2d6c7a10fc96f8.mount: Deactivated successfully. Mar 17 17:51:28.202500 systemd[1]: run-netns-cni\x2d535c7284\x2d0e60\x2d1d5d\x2db032\x2d5fa61ad07b38.mount: Deactivated successfully. Mar 17 17:51:28.202590 systemd[1]: run-netns-cni\x2d6dcd2d15\x2d59c0\x2db88c\x2d4922\x2d458d822acd40.mount: Deactivated successfully. Mar 17 17:51:29.207983 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3388262437.mount: Deactivated successfully. Mar 17 17:51:29.290948 containerd[1495]: time="2025-03-17T17:51:29.290827335Z" level=error msg="Failed to destroy network for sandbox \"d4e54e69b2c6c0978d76e94b92a77df80dc1c72566f44f295fe6bd35b1e61a73\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:29.291685 containerd[1495]: time="2025-03-17T17:51:29.291621964Z" level=error msg="encountered an error cleaning up failed sandbox \"d4e54e69b2c6c0978d76e94b92a77df80dc1c72566f44f295fe6bd35b1e61a73\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:29.291755 containerd[1495]: time="2025-03-17T17:51:29.291722071Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-db9856-fshz9,Uid:e2616273-669f-41e6-aed5-5c36404c0a1a,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox 
\"d4e54e69b2c6c0978d76e94b92a77df80dc1c72566f44f295fe6bd35b1e61a73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:29.292634 kubelet[2599]: E0317 17:51:29.292087 2599 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4e54e69b2c6c0978d76e94b92a77df80dc1c72566f44f295fe6bd35b1e61a73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:29.292634 kubelet[2599]: E0317 17:51:29.292180 2599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4e54e69b2c6c0978d76e94b92a77df80dc1c72566f44f295fe6bd35b1e61a73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-db9856-fshz9" Mar 17 17:51:29.292634 kubelet[2599]: E0317 17:51:29.292212 2599 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4e54e69b2c6c0978d76e94b92a77df80dc1c72566f44f295fe6bd35b1e61a73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-db9856-fshz9" Mar 17 17:51:29.293105 kubelet[2599]: E0317 17:51:29.292301 2599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-db9856-fshz9_calico-apiserver(e2616273-669f-41e6-aed5-5c36404c0a1a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-db9856-fshz9_calico-apiserver(e2616273-669f-41e6-aed5-5c36404c0a1a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d4e54e69b2c6c0978d76e94b92a77df80dc1c72566f44f295fe6bd35b1e61a73\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-db9856-fshz9" podUID="e2616273-669f-41e6-aed5-5c36404c0a1a" Mar 17 17:51:29.294668 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d4e54e69b2c6c0978d76e94b92a77df80dc1c72566f44f295fe6bd35b1e61a73-shm.mount: Deactivated successfully. Mar 17 17:51:29.334452 containerd[1495]: time="2025-03-17T17:51:29.333251472Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:51:29.336226 containerd[1495]: time="2025-03-17T17:51:29.336034719Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.2: active requests=0, bytes read=142241445" Mar 17 17:51:29.341806 containerd[1495]: time="2025-03-17T17:51:29.341767406Z" level=info msg="ImageCreate event name:\"sha256:048bf7af1f8c697d151dbecc478a18e89d89ed8627da98e17a56c11b3d45d351\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:51:29.349702 containerd[1495]: time="2025-03-17T17:51:29.349656835Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:51:29.351173 containerd[1495]: time="2025-03-17T17:51:29.350743392Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.2\" with image id \"sha256:048bf7af1f8c697d151dbecc478a18e89d89ed8627da98e17a56c11b3d45d351\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.2\", repo digest 
\"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\", size \"142241307\" in 12.083278495s" Mar 17 17:51:29.351524 containerd[1495]: time="2025-03-17T17:51:29.351357884Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\" returns image reference \"sha256:048bf7af1f8c697d151dbecc478a18e89d89ed8627da98e17a56c11b3d45d351\"" Mar 17 17:51:29.364403 containerd[1495]: time="2025-03-17T17:51:29.364351902Z" level=info msg="CreateContainer within sandbox \"3283539b9c5fd33722a92838485c618f850526ff6b36f4ba80640e273e47bc0f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 17 17:51:29.403133 kubelet[2599]: I0317 17:51:29.402386 2599 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4e54e69b2c6c0978d76e94b92a77df80dc1c72566f44f295fe6bd35b1e61a73" Mar 17 17:51:29.403383 containerd[1495]: time="2025-03-17T17:51:29.403353915Z" level=info msg="StopPodSandbox for \"d4e54e69b2c6c0978d76e94b92a77df80dc1c72566f44f295fe6bd35b1e61a73\"" Mar 17 17:51:29.403764 containerd[1495]: time="2025-03-17T17:51:29.403745289Z" level=info msg="Ensure that sandbox d4e54e69b2c6c0978d76e94b92a77df80dc1c72566f44f295fe6bd35b1e61a73 in task-service has been cleanup successfully" Mar 17 17:51:29.404104 containerd[1495]: time="2025-03-17T17:51:29.404086679Z" level=info msg="TearDown network for sandbox \"d4e54e69b2c6c0978d76e94b92a77df80dc1c72566f44f295fe6bd35b1e61a73\" successfully" Mar 17 17:51:29.404193 containerd[1495]: time="2025-03-17T17:51:29.404180124Z" level=info msg="StopPodSandbox for \"d4e54e69b2c6c0978d76e94b92a77df80dc1c72566f44f295fe6bd35b1e61a73\" returns successfully" Mar 17 17:51:29.406166 containerd[1495]: time="2025-03-17T17:51:29.406145238Z" level=info msg="StopPodSandbox for \"fbac00f9780cef4d54294d3c83e7122b526db0b29a522ca87666b0c7e7b9c4a0\"" Mar 17 17:51:29.406337 containerd[1495]: time="2025-03-17T17:51:29.406322260Z" level=info msg="TearDown network for sandbox 
\"fbac00f9780cef4d54294d3c83e7122b526db0b29a522ca87666b0c7e7b9c4a0\" successfully" Mar 17 17:51:29.406390 containerd[1495]: time="2025-03-17T17:51:29.406377643Z" level=info msg="StopPodSandbox for \"fbac00f9780cef4d54294d3c83e7122b526db0b29a522ca87666b0c7e7b9c4a0\" returns successfully" Mar 17 17:51:29.406988 containerd[1495]: time="2025-03-17T17:51:29.406926502Z" level=info msg="StopPodSandbox for \"2c183dfeb323cfd9dde4cc440f77974bf851926c3aa7c9e2accc48f0e0a01822\"" Mar 17 17:51:29.408095 containerd[1495]: time="2025-03-17T17:51:29.407112450Z" level=info msg="TearDown network for sandbox \"2c183dfeb323cfd9dde4cc440f77974bf851926c3aa7c9e2accc48f0e0a01822\" successfully" Mar 17 17:51:29.408095 containerd[1495]: time="2025-03-17T17:51:29.407147285Z" level=info msg="StopPodSandbox for \"2c183dfeb323cfd9dde4cc440f77974bf851926c3aa7c9e2accc48f0e0a01822\" returns successfully" Mar 17 17:51:29.408095 containerd[1495]: time="2025-03-17T17:51:29.407489848Z" level=info msg="StopPodSandbox for \"b90a88a1e5da255529675e67dad4d325788b366e74fc643f2c63b0f09c948b45\"" Mar 17 17:51:29.408095 containerd[1495]: time="2025-03-17T17:51:29.407577272Z" level=info msg="TearDown network for sandbox \"b90a88a1e5da255529675e67dad4d325788b366e74fc643f2c63b0f09c948b45\" successfully" Mar 17 17:51:29.408095 containerd[1495]: time="2025-03-17T17:51:29.407587531Z" level=info msg="StopPodSandbox for \"b90a88a1e5da255529675e67dad4d325788b366e74fc643f2c63b0f09c948b45\" returns successfully" Mar 17 17:51:29.408621 containerd[1495]: time="2025-03-17T17:51:29.408447774Z" level=info msg="StopPodSandbox for \"78661d8268b9ec071762ea23e31e21309122e61c5cbce70ba5e3a5ccaf5e1c2a\"" Mar 17 17:51:29.408621 containerd[1495]: time="2025-03-17T17:51:29.408557961Z" level=info msg="TearDown network for sandbox \"78661d8268b9ec071762ea23e31e21309122e61c5cbce70ba5e3a5ccaf5e1c2a\" successfully" Mar 17 17:51:29.408621 containerd[1495]: time="2025-03-17T17:51:29.408568049Z" level=info msg="StopPodSandbox for 
\"78661d8268b9ec071762ea23e31e21309122e61c5cbce70ba5e3a5ccaf5e1c2a\" returns successfully" Mar 17 17:51:29.409131 containerd[1495]: time="2025-03-17T17:51:29.409110105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-db9856-fshz9,Uid:e2616273-669f-41e6-aed5-5c36404c0a1a,Namespace:calico-apiserver,Attempt:5,}" Mar 17 17:51:29.409504 containerd[1495]: time="2025-03-17T17:51:29.409385201Z" level=error msg="Failed to destroy network for sandbox \"8adf8759b12ff4a946ab03e0bb8831fef8e033b57b9d8746161339a1002a6b5c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:29.410267 containerd[1495]: time="2025-03-17T17:51:29.410238851Z" level=error msg="encountered an error cleaning up failed sandbox \"8adf8759b12ff4a946ab03e0bb8831fef8e033b57b9d8746161339a1002a6b5c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:29.410756 containerd[1495]: time="2025-03-17T17:51:29.410716988Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9zh68,Uid:8eeb7871-e618-4798-a87d-f7b3c9c67c97,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"8adf8759b12ff4a946ab03e0bb8831fef8e033b57b9d8746161339a1002a6b5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:29.411093 kubelet[2599]: E0317 17:51:29.411064 2599 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8adf8759b12ff4a946ab03e0bb8831fef8e033b57b9d8746161339a1002a6b5c\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:29.411535 kubelet[2599]: E0317 17:51:29.411297 2599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8adf8759b12ff4a946ab03e0bb8831fef8e033b57b9d8746161339a1002a6b5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9zh68" Mar 17 17:51:29.411535 kubelet[2599]: E0317 17:51:29.411322 2599 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8adf8759b12ff4a946ab03e0bb8831fef8e033b57b9d8746161339a1002a6b5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9zh68" Mar 17 17:51:29.411535 kubelet[2599]: E0317 17:51:29.411357 2599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9zh68_calico-system(8eeb7871-e618-4798-a87d-f7b3c9c67c97)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9zh68_calico-system(8eeb7871-e618-4798-a87d-f7b3c9c67c97)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8adf8759b12ff4a946ab03e0bb8831fef8e033b57b9d8746161339a1002a6b5c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9zh68" podUID="8eeb7871-e618-4798-a87d-f7b3c9c67c97" Mar 17 17:51:29.439632 containerd[1495]: 
time="2025-03-17T17:51:29.439569464Z" level=error msg="Failed to destroy network for sandbox \"4d3f17c2b1f81a79e9e0941508ed80d28168b7b000b565dc9d28f2f8a56aac1c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:29.440082 containerd[1495]: time="2025-03-17T17:51:29.439985092Z" level=error msg="encountered an error cleaning up failed sandbox \"4d3f17c2b1f81a79e9e0941508ed80d28168b7b000b565dc9d28f2f8a56aac1c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:29.440082 containerd[1495]: time="2025-03-17T17:51:29.440056306Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t9ppl,Uid:b10ce8b2-d481-4335-85f1-af093a79a238,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"4d3f17c2b1f81a79e9e0941508ed80d28168b7b000b565dc9d28f2f8a56aac1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:29.440361 kubelet[2599]: E0317 17:51:29.440301 2599 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d3f17c2b1f81a79e9e0941508ed80d28168b7b000b565dc9d28f2f8a56aac1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:29.440486 kubelet[2599]: E0317 17:51:29.440379 2599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"4d3f17c2b1f81a79e9e0941508ed80d28168b7b000b565dc9d28f2f8a56aac1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-t9ppl" Mar 17 17:51:29.440486 kubelet[2599]: E0317 17:51:29.440402 2599 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d3f17c2b1f81a79e9e0941508ed80d28168b7b000b565dc9d28f2f8a56aac1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-t9ppl" Mar 17 17:51:29.440486 kubelet[2599]: E0317 17:51:29.440446 2599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-t9ppl_kube-system(b10ce8b2-d481-4335-85f1-af093a79a238)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-t9ppl_kube-system(b10ce8b2-d481-4335-85f1-af093a79a238)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4d3f17c2b1f81a79e9e0941508ed80d28168b7b000b565dc9d28f2f8a56aac1c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-t9ppl" podUID="b10ce8b2-d481-4335-85f1-af093a79a238" Mar 17 17:51:29.449454 containerd[1495]: time="2025-03-17T17:51:29.448890637Z" level=error msg="Failed to destroy network for sandbox \"ac25a47783466d6239e583f1421a6ed087f91b1756c0cc6ae88deafd7c261ff0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:29.449633 
containerd[1495]: time="2025-03-17T17:51:29.449478109Z" level=error msg="encountered an error cleaning up failed sandbox \"ac25a47783466d6239e583f1421a6ed087f91b1756c0cc6ae88deafd7c261ff0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:29.449633 containerd[1495]: time="2025-03-17T17:51:29.449549763Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nk8jr,Uid:8c4845cd-7043-485d-9bdd-731020b2270e,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"ac25a47783466d6239e583f1421a6ed087f91b1756c0cc6ae88deafd7c261ff0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:29.449876 containerd[1495]: time="2025-03-17T17:51:29.449764825Z" level=error msg="Failed to destroy network for sandbox \"122b7b1d893f5e19f79ea3bb913f35a27f709240139b8ef750626c7136bca492\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:29.450327 kubelet[2599]: E0317 17:51:29.450260 2599 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac25a47783466d6239e583f1421a6ed087f91b1756c0cc6ae88deafd7c261ff0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:29.450547 kubelet[2599]: E0317 17:51:29.450348 2599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"ac25a47783466d6239e583f1421a6ed087f91b1756c0cc6ae88deafd7c261ff0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-nk8jr" Mar 17 17:51:29.450547 kubelet[2599]: E0317 17:51:29.450377 2599 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac25a47783466d6239e583f1421a6ed087f91b1756c0cc6ae88deafd7c261ff0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-nk8jr" Mar 17 17:51:29.450547 kubelet[2599]: E0317 17:51:29.450437 2599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-nk8jr_kube-system(8c4845cd-7043-485d-9bdd-731020b2270e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-nk8jr_kube-system(8c4845cd-7043-485d-9bdd-731020b2270e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ac25a47783466d6239e583f1421a6ed087f91b1756c0cc6ae88deafd7c261ff0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-nk8jr" podUID="8c4845cd-7043-485d-9bdd-731020b2270e" Mar 17 17:51:29.450709 containerd[1495]: time="2025-03-17T17:51:29.450681965Z" level=error msg="encountered an error cleaning up failed sandbox \"122b7b1d893f5e19f79ea3bb913f35a27f709240139b8ef750626c7136bca492\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Mar 17 17:51:29.450778 containerd[1495]: time="2025-03-17T17:51:29.450726138Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-db9856-swh96,Uid:802f1eaf-7d52-4b00-9fa9-f37418e92a64,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"122b7b1d893f5e19f79ea3bb913f35a27f709240139b8ef750626c7136bca492\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:29.450892 kubelet[2599]: E0317 17:51:29.450866 2599 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"122b7b1d893f5e19f79ea3bb913f35a27f709240139b8ef750626c7136bca492\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:29.450935 kubelet[2599]: E0317 17:51:29.450897 2599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"122b7b1d893f5e19f79ea3bb913f35a27f709240139b8ef750626c7136bca492\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-db9856-swh96" Mar 17 17:51:29.450935 kubelet[2599]: E0317 17:51:29.450913 2599 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"122b7b1d893f5e19f79ea3bb913f35a27f709240139b8ef750626c7136bca492\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-db9856-swh96" Mar 17 17:51:29.451054 kubelet[2599]: E0317 17:51:29.450936 2599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-db9856-swh96_calico-apiserver(802f1eaf-7d52-4b00-9fa9-f37418e92a64)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-db9856-swh96_calico-apiserver(802f1eaf-7d52-4b00-9fa9-f37418e92a64)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"122b7b1d893f5e19f79ea3bb913f35a27f709240139b8ef750626c7136bca492\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-db9856-swh96" podUID="802f1eaf-7d52-4b00-9fa9-f37418e92a64" Mar 17 17:51:29.458579 containerd[1495]: time="2025-03-17T17:51:29.458459986Z" level=error msg="Failed to destroy network for sandbox \"330a159792e66ced1c43f48d9ad3ac26e6a568a1bd1f57d4357afbbff34cefb2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:29.459005 containerd[1495]: time="2025-03-17T17:51:29.458956156Z" level=error msg="encountered an error cleaning up failed sandbox \"330a159792e66ced1c43f48d9ad3ac26e6a568a1bd1f57d4357afbbff34cefb2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:29.459168 containerd[1495]: time="2025-03-17T17:51:29.459046766Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d6b67b85-j5xwp,Uid:4ee3dfd7-d4c4-495e-b4fa-6712bcf8d78e,Namespace:calico-system,Attempt:4,} failed, error" error="failed 
to setup network for sandbox \"330a159792e66ced1c43f48d9ad3ac26e6a568a1bd1f57d4357afbbff34cefb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:29.459370 kubelet[2599]: E0317 17:51:29.459327 2599 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"330a159792e66ced1c43f48d9ad3ac26e6a568a1bd1f57d4357afbbff34cefb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:29.459452 kubelet[2599]: E0317 17:51:29.459401 2599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"330a159792e66ced1c43f48d9ad3ac26e6a568a1bd1f57d4357afbbff34cefb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d6b67b85-j5xwp" Mar 17 17:51:29.459452 kubelet[2599]: E0317 17:51:29.459432 2599 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"330a159792e66ced1c43f48d9ad3ac26e6a568a1bd1f57d4357afbbff34cefb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d6b67b85-j5xwp" Mar 17 17:51:29.459522 kubelet[2599]: E0317 17:51:29.459493 2599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7d6b67b85-j5xwp_calico-system(4ee3dfd7-d4c4-495e-b4fa-6712bcf8d78e)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7d6b67b85-j5xwp_calico-system(4ee3dfd7-d4c4-495e-b4fa-6712bcf8d78e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"330a159792e66ced1c43f48d9ad3ac26e6a568a1bd1f57d4357afbbff34cefb2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7d6b67b85-j5xwp" podUID="4ee3dfd7-d4c4-495e-b4fa-6712bcf8d78e" Mar 17 17:51:29.747447 containerd[1495]: time="2025-03-17T17:51:29.747284020Z" level=info msg="CreateContainer within sandbox \"3283539b9c5fd33722a92838485c618f850526ff6b36f4ba80640e273e47bc0f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e6118cbc259d2bb9b0fd192967380b20605a569c3fcac7f3254849e7dfc76f2f\"" Mar 17 17:51:29.748141 containerd[1495]: time="2025-03-17T17:51:29.748075845Z" level=info msg="StartContainer for \"e6118cbc259d2bb9b0fd192967380b20605a569c3fcac7f3254849e7dfc76f2f\"" Mar 17 17:51:29.824247 containerd[1495]: time="2025-03-17T17:51:29.824170640Z" level=error msg="Failed to destroy network for sandbox \"49b3f06caf72e8181566d1132f3c5b28eb96b2740762fa305748ca9eed71be9d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:29.824711 containerd[1495]: time="2025-03-17T17:51:29.824653275Z" level=error msg="encountered an error cleaning up failed sandbox \"49b3f06caf72e8181566d1132f3c5b28eb96b2740762fa305748ca9eed71be9d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:29.824769 containerd[1495]: 
time="2025-03-17T17:51:29.824737263Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-db9856-fshz9,Uid:e2616273-669f-41e6-aed5-5c36404c0a1a,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox \"49b3f06caf72e8181566d1132f3c5b28eb96b2740762fa305748ca9eed71be9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:29.825868 kubelet[2599]: E0317 17:51:29.825259 2599 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49b3f06caf72e8181566d1132f3c5b28eb96b2740762fa305748ca9eed71be9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:51:29.825868 kubelet[2599]: E0317 17:51:29.825342 2599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49b3f06caf72e8181566d1132f3c5b28eb96b2740762fa305748ca9eed71be9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-db9856-fshz9" Mar 17 17:51:29.825868 kubelet[2599]: E0317 17:51:29.825371 2599 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49b3f06caf72e8181566d1132f3c5b28eb96b2740762fa305748ca9eed71be9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-db9856-fshz9" Mar 17 17:51:29.826680 kubelet[2599]: 
E0317 17:51:29.825438 2599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-db9856-fshz9_calico-apiserver(e2616273-669f-41e6-aed5-5c36404c0a1a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-db9856-fshz9_calico-apiserver(e2616273-669f-41e6-aed5-5c36404c0a1a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"49b3f06caf72e8181566d1132f3c5b28eb96b2740762fa305748ca9eed71be9d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-db9856-fshz9" podUID="e2616273-669f-41e6-aed5-5c36404c0a1a" Mar 17 17:51:29.842235 systemd[1]: Started cri-containerd-e6118cbc259d2bb9b0fd192967380b20605a569c3fcac7f3254849e7dfc76f2f.scope - libcontainer container e6118cbc259d2bb9b0fd192967380b20605a569c3fcac7f3254849e7dfc76f2f. Mar 17 17:51:29.879221 containerd[1495]: time="2025-03-17T17:51:29.879158670Z" level=info msg="StartContainer for \"e6118cbc259d2bb9b0fd192967380b20605a569c3fcac7f3254849e7dfc76f2f\" returns successfully" Mar 17 17:51:30.003408 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Mar 17 17:51:30.003626 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Mar 17 17:51:30.214376 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-330a159792e66ced1c43f48d9ad3ac26e6a568a1bd1f57d4357afbbff34cefb2-shm.mount: Deactivated successfully. Mar 17 17:51:30.214952 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ac25a47783466d6239e583f1421a6ed087f91b1756c0cc6ae88deafd7c261ff0-shm.mount: Deactivated successfully. Mar 17 17:51:30.215184 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8adf8759b12ff4a946ab03e0bb8831fef8e033b57b9d8746161339a1002a6b5c-shm.mount: Deactivated successfully. 
Mar 17 17:51:30.215437 systemd[1]: run-netns-cni\x2da9e1b6e5\x2decdc\x2de3e3\x2df15e\x2dcfe824612110.mount: Deactivated successfully. Mar 17 17:51:30.408976 kubelet[2599]: I0317 17:51:30.408941 2599 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="122b7b1d893f5e19f79ea3bb913f35a27f709240139b8ef750626c7136bca492" Mar 17 17:51:30.409813 containerd[1495]: time="2025-03-17T17:51:30.409775043Z" level=info msg="StopPodSandbox for \"122b7b1d893f5e19f79ea3bb913f35a27f709240139b8ef750626c7136bca492\"" Mar 17 17:51:30.410163 containerd[1495]: time="2025-03-17T17:51:30.410037224Z" level=info msg="Ensure that sandbox 122b7b1d893f5e19f79ea3bb913f35a27f709240139b8ef750626c7136bca492 in task-service has been cleanup successfully" Mar 17 17:51:30.410951 containerd[1495]: time="2025-03-17T17:51:30.410256816Z" level=info msg="TearDown network for sandbox \"122b7b1d893f5e19f79ea3bb913f35a27f709240139b8ef750626c7136bca492\" successfully" Mar 17 17:51:30.410951 containerd[1495]: time="2025-03-17T17:51:30.410278136Z" level=info msg="StopPodSandbox for \"122b7b1d893f5e19f79ea3bb913f35a27f709240139b8ef750626c7136bca492\" returns successfully" Mar 17 17:51:30.410951 containerd[1495]: time="2025-03-17T17:51:30.410763858Z" level=info msg="StopPodSandbox for \"4b1bcb5cdfc74a6a437aff803d44775132982a95f2b7c6175140e0c9f341858e\"" Mar 17 17:51:30.410951 containerd[1495]: time="2025-03-17T17:51:30.410853887Z" level=info msg="TearDown network for sandbox \"4b1bcb5cdfc74a6a437aff803d44775132982a95f2b7c6175140e0c9f341858e\" successfully" Mar 17 17:51:30.410951 containerd[1495]: time="2025-03-17T17:51:30.410865799Z" level=info msg="StopPodSandbox for \"4b1bcb5cdfc74a6a437aff803d44775132982a95f2b7c6175140e0c9f341858e\" returns successfully" Mar 17 17:51:30.413048 containerd[1495]: time="2025-03-17T17:51:30.411485893Z" level=info msg="StopPodSandbox for \"20d5dae1d6c12324bb73944ca482776c3f59c00f5d43e5aa5f51a227cd0c58a1\"" Mar 17 17:51:30.413048 containerd[1495]: 
time="2025-03-17T17:51:30.411581622Z" level=info msg="TearDown network for sandbox \"20d5dae1d6c12324bb73944ca482776c3f59c00f5d43e5aa5f51a227cd0c58a1\" successfully" Mar 17 17:51:30.413048 containerd[1495]: time="2025-03-17T17:51:30.411593234Z" level=info msg="StopPodSandbox for \"20d5dae1d6c12324bb73944ca482776c3f59c00f5d43e5aa5f51a227cd0c58a1\" returns successfully" Mar 17 17:51:30.413048 containerd[1495]: time="2025-03-17T17:51:30.412517237Z" level=info msg="StopPodSandbox for \"7bca8f67a29d3302112647e05c33ad952b2e35e3e88e474c06f764720b805033\"" Mar 17 17:51:30.413048 containerd[1495]: time="2025-03-17T17:51:30.412608248Z" level=info msg="TearDown network for sandbox \"7bca8f67a29d3302112647e05c33ad952b2e35e3e88e474c06f764720b805033\" successfully" Mar 17 17:51:30.413048 containerd[1495]: time="2025-03-17T17:51:30.412621132Z" level=info msg="StopPodSandbox for \"7bca8f67a29d3302112647e05c33ad952b2e35e3e88e474c06f764720b805033\" returns successfully" Mar 17 17:51:30.413048 containerd[1495]: time="2025-03-17T17:51:30.412884557Z" level=info msg="StopPodSandbox for \"f3daeebfad93cd496e897f561ff8e055ad960f28c8676d594ffa89f640a4a005\"" Mar 17 17:51:30.413048 containerd[1495]: time="2025-03-17T17:51:30.412975688Z" level=info msg="TearDown network for sandbox \"f3daeebfad93cd496e897f561ff8e055ad960f28c8676d594ffa89f640a4a005\" successfully" Mar 17 17:51:30.413048 containerd[1495]: time="2025-03-17T17:51:30.412987360Z" level=info msg="StopPodSandbox for \"f3daeebfad93cd496e897f561ff8e055ad960f28c8676d594ffa89f640a4a005\" returns successfully" Mar 17 17:51:30.412953 systemd[1]: run-netns-cni\x2d989c2208\x2d8831\x2ddfd3\x2d06c4\x2db0d741902846.mount: Deactivated successfully. 
Mar 17 17:51:30.413696 kubelet[2599]: I0317 17:51:30.413148 2599 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d3f17c2b1f81a79e9e0941508ed80d28168b7b000b565dc9d28f2f8a56aac1c" Mar 17 17:51:30.413861 containerd[1495]: time="2025-03-17T17:51:30.413831974Z" level=info msg="StopPodSandbox for \"4d3f17c2b1f81a79e9e0941508ed80d28168b7b000b565dc9d28f2f8a56aac1c\"" Mar 17 17:51:30.414065 containerd[1495]: time="2025-03-17T17:51:30.414034134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-db9856-swh96,Uid:802f1eaf-7d52-4b00-9fa9-f37418e92a64,Namespace:calico-apiserver,Attempt:5,}" Mar 17 17:51:30.414381 containerd[1495]: time="2025-03-17T17:51:30.414081021Z" level=info msg="Ensure that sandbox 4d3f17c2b1f81a79e9e0941508ed80d28168b7b000b565dc9d28f2f8a56aac1c in task-service has been cleanup successfully" Mar 17 17:51:30.414979 containerd[1495]: time="2025-03-17T17:51:30.414953368Z" level=info msg="TearDown network for sandbox \"4d3f17c2b1f81a79e9e0941508ed80d28168b7b000b565dc9d28f2f8a56aac1c\" successfully" Mar 17 17:51:30.415087 containerd[1495]: time="2025-03-17T17:51:30.414992602Z" level=info msg="StopPodSandbox for \"4d3f17c2b1f81a79e9e0941508ed80d28168b7b000b565dc9d28f2f8a56aac1c\" returns successfully" Mar 17 17:51:30.415693 containerd[1495]: time="2025-03-17T17:51:30.415644144Z" level=info msg="StopPodSandbox for \"7b5c001b9f3d01ac15620c2f92a043c0fc3844ae4e4b16b89c4e37e5f905e7be\"" Mar 17 17:51:30.415780 containerd[1495]: time="2025-03-17T17:51:30.415757647Z" level=info msg="TearDown network for sandbox \"7b5c001b9f3d01ac15620c2f92a043c0fc3844ae4e4b16b89c4e37e5f905e7be\" successfully" Mar 17 17:51:30.415780 containerd[1495]: time="2025-03-17T17:51:30.415775300Z" level=info msg="StopPodSandbox for \"7b5c001b9f3d01ac15620c2f92a043c0fc3844ae4e4b16b89c4e37e5f905e7be\" returns successfully" Mar 17 17:51:30.416919 systemd[1]: run-netns-cni\x2d71f33083\x2da566\x2d5f36\x2dccc4\x2deae01e772981.mount: 
Deactivated successfully. Mar 17 17:51:30.417494 containerd[1495]: time="2025-03-17T17:51:30.417232915Z" level=info msg="StopPodSandbox for \"c718459327930e395a835f102ab401f2e48060fe2c2c6cbef6ad483b0c4e740e\"" Mar 17 17:51:30.417494 containerd[1495]: time="2025-03-17T17:51:30.417350886Z" level=info msg="TearDown network for sandbox \"c718459327930e395a835f102ab401f2e48060fe2c2c6cbef6ad483b0c4e740e\" successfully" Mar 17 17:51:30.417494 containerd[1495]: time="2025-03-17T17:51:30.417364943Z" level=info msg="StopPodSandbox for \"c718459327930e395a835f102ab401f2e48060fe2c2c6cbef6ad483b0c4e740e\" returns successfully" Mar 17 17:51:30.418536 containerd[1495]: time="2025-03-17T17:51:30.418200300Z" level=info msg="StopPodSandbox for \"fa0ee85b093d74ef947ec6970c5d27144b44a194911f7cd2536631bd04a90cc6\"" Mar 17 17:51:30.418536 containerd[1495]: time="2025-03-17T17:51:30.418288616Z" level=info msg="TearDown network for sandbox \"fa0ee85b093d74ef947ec6970c5d27144b44a194911f7cd2536631bd04a90cc6\" successfully" Mar 17 17:51:30.418536 containerd[1495]: time="2025-03-17T17:51:30.418298394Z" level=info msg="StopPodSandbox for \"fa0ee85b093d74ef947ec6970c5d27144b44a194911f7cd2536631bd04a90cc6\" returns successfully" Mar 17 17:51:30.418536 containerd[1495]: time="2025-03-17T17:51:30.418504481Z" level=info msg="StopPodSandbox for \"ec8dc90f4256e5e13ea71e3c83af7eded25b71eda92de161921988d536de088c\"" Mar 17 17:51:30.418727 containerd[1495]: time="2025-03-17T17:51:30.418595783Z" level=info msg="TearDown network for sandbox \"ec8dc90f4256e5e13ea71e3c83af7eded25b71eda92de161921988d536de088c\" successfully" Mar 17 17:51:30.418727 containerd[1495]: time="2025-03-17T17:51:30.418609709Z" level=info msg="StopPodSandbox for \"ec8dc90f4256e5e13ea71e3c83af7eded25b71eda92de161921988d536de088c\" returns successfully" Mar 17 17:51:30.419788 kubelet[2599]: E0317 17:51:30.418810 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:51:30.419870 containerd[1495]: time="2025-03-17T17:51:30.419128351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t9ppl,Uid:b10ce8b2-d481-4335-85f1-af093a79a238,Namespace:kube-system,Attempt:5,}" Mar 17 17:51:30.420401 kubelet[2599]: I0317 17:51:30.420379 2599 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8adf8759b12ff4a946ab03e0bb8831fef8e033b57b9d8746161339a1002a6b5c" Mar 17 17:51:30.425398 kubelet[2599]: I0317 17:51:30.425324 2599 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac25a47783466d6239e583f1421a6ed087f91b1756c0cc6ae88deafd7c261ff0" Mar 17 17:51:30.426033 containerd[1495]: time="2025-03-17T17:51:30.425938619Z" level=info msg="StopPodSandbox for \"ac25a47783466d6239e583f1421a6ed087f91b1756c0cc6ae88deafd7c261ff0\"" Mar 17 17:51:30.426252 containerd[1495]: time="2025-03-17T17:51:30.426213224Z" level=info msg="Ensure that sandbox ac25a47783466d6239e583f1421a6ed087f91b1756c0cc6ae88deafd7c261ff0 in task-service has been cleanup successfully" Mar 17 17:51:30.429039 containerd[1495]: time="2025-03-17T17:51:30.426460339Z" level=info msg="TearDown network for sandbox \"ac25a47783466d6239e583f1421a6ed087f91b1756c0cc6ae88deafd7c261ff0\" successfully" Mar 17 17:51:30.429039 containerd[1495]: time="2025-03-17T17:51:30.426479064Z" level=info msg="StopPodSandbox for \"ac25a47783466d6239e583f1421a6ed087f91b1756c0cc6ae88deafd7c261ff0\" returns successfully" Mar 17 17:51:30.429039 containerd[1495]: time="2025-03-17T17:51:30.427135596Z" level=info msg="StopPodSandbox for \"8adf8759b12ff4a946ab03e0bb8831fef8e033b57b9d8746161339a1002a6b5c\"" Mar 17 17:51:30.429039 containerd[1495]: time="2025-03-17T17:51:30.427300775Z" level=info msg="Ensure that sandbox 8adf8759b12ff4a946ab03e0bb8831fef8e033b57b9d8746161339a1002a6b5c in task-service has been cleanup successfully" Mar 17 17:51:30.429039 containerd[1495]: 
time="2025-03-17T17:51:30.427633259Z" level=info msg="StopPodSandbox for \"611fa18f639a22f7c2dd78848bdc5dee201a8ab88e85e873a7047048763f300a\"" Mar 17 17:51:30.429039 containerd[1495]: time="2025-03-17T17:51:30.427641895Z" level=info msg="TearDown network for sandbox \"8adf8759b12ff4a946ab03e0bb8831fef8e033b57b9d8746161339a1002a6b5c\" successfully" Mar 17 17:51:30.429039 containerd[1495]: time="2025-03-17T17:51:30.427759976Z" level=info msg="StopPodSandbox for \"8adf8759b12ff4a946ab03e0bb8831fef8e033b57b9d8746161339a1002a6b5c\" returns successfully" Mar 17 17:51:30.429039 containerd[1495]: time="2025-03-17T17:51:30.427750499Z" level=info msg="TearDown network for sandbox \"611fa18f639a22f7c2dd78848bdc5dee201a8ab88e85e873a7047048763f300a\" successfully" Mar 17 17:51:30.429039 containerd[1495]: time="2025-03-17T17:51:30.427808558Z" level=info msg="StopPodSandbox for \"611fa18f639a22f7c2dd78848bdc5dee201a8ab88e85e873a7047048763f300a\" returns successfully" Mar 17 17:51:30.429633 systemd[1]: run-netns-cni\x2d813b1307\x2dfce1\x2dcb45\x2d09e1\x2debb2a02fdee0.mount: Deactivated successfully. 
Mar 17 17:51:30.431144 containerd[1495]: time="2025-03-17T17:51:30.431116595Z" level=info msg="StopPodSandbox for \"bad719ef2bd8472495fd310f3c4bd1ad2f5f9caa1e2a7ea36599d5afee4c3d25\"" Mar 17 17:51:30.432757 containerd[1495]: time="2025-03-17T17:51:30.432572657Z" level=info msg="TearDown network for sandbox \"bad719ef2bd8472495fd310f3c4bd1ad2f5f9caa1e2a7ea36599d5afee4c3d25\" successfully" Mar 17 17:51:30.432757 containerd[1495]: time="2025-03-17T17:51:30.432599157Z" level=info msg="StopPodSandbox for \"bad719ef2bd8472495fd310f3c4bd1ad2f5f9caa1e2a7ea36599d5afee4c3d25\" returns successfully" Mar 17 17:51:30.432757 containerd[1495]: time="2025-03-17T17:51:30.431331678Z" level=info msg="StopPodSandbox for \"f5565325feec65207a1acf2ef59fbe892b90a323d863ed912cb0dd3d09c20889\"" Mar 17 17:51:30.432757 containerd[1495]: time="2025-03-17T17:51:30.432715565Z" level=info msg="TearDown network for sandbox \"f5565325feec65207a1acf2ef59fbe892b90a323d863ed912cb0dd3d09c20889\" successfully" Mar 17 17:51:30.432757 containerd[1495]: time="2025-03-17T17:51:30.432724171Z" level=info msg="StopPodSandbox for \"f5565325feec65207a1acf2ef59fbe892b90a323d863ed912cb0dd3d09c20889\" returns successfully" Mar 17 17:51:30.433187 systemd[1]: run-netns-cni\x2d68f25470\x2da7d9\x2d44dc\x2d4183\x2de1b49d12b708.mount: Deactivated successfully. 
Mar 17 17:51:30.433541 containerd[1495]: time="2025-03-17T17:51:30.433517790Z" level=info msg="StopPodSandbox for \"8283abb1d7f07f2e1772939ea0e998f374a695aaefe53c1b9284af89adbe85b1\"" Mar 17 17:51:30.433658 containerd[1495]: time="2025-03-17T17:51:30.433612047Z" level=info msg="TearDown network for sandbox \"8283abb1d7f07f2e1772939ea0e998f374a695aaefe53c1b9284af89adbe85b1\" successfully" Mar 17 17:51:30.433658 containerd[1495]: time="2025-03-17T17:51:30.433630682Z" level=info msg="StopPodSandbox for \"8283abb1d7f07f2e1772939ea0e998f374a695aaefe53c1b9284af89adbe85b1\" returns successfully" Mar 17 17:51:30.433817 containerd[1495]: time="2025-03-17T17:51:30.433686116Z" level=info msg="StopPodSandbox for \"a9fcaae0af3522cc5fc57698f5f6bd14e963e141a90fc5f11a5c576ce8759dc0\"" Mar 17 17:51:30.433817 containerd[1495]: time="2025-03-17T17:51:30.433777127Z" level=info msg="TearDown network for sandbox \"a9fcaae0af3522cc5fc57698f5f6bd14e963e141a90fc5f11a5c576ce8759dc0\" successfully" Mar 17 17:51:30.433817 containerd[1495]: time="2025-03-17T17:51:30.433787817Z" level=info msg="StopPodSandbox for \"a9fcaae0af3522cc5fc57698f5f6bd14e963e141a90fc5f11a5c576ce8759dc0\" returns successfully" Mar 17 17:51:30.434374 containerd[1495]: time="2025-03-17T17:51:30.434345744Z" level=info msg="StopPodSandbox for \"c3bc28f45d2ec64413f682e0fa3ae1f2815a1ace0cf71f660a835588639d5f9d\"" Mar 17 17:51:30.434481 containerd[1495]: time="2025-03-17T17:51:30.434453015Z" level=info msg="TearDown network for sandbox \"c3bc28f45d2ec64413f682e0fa3ae1f2815a1ace0cf71f660a835588639d5f9d\" successfully" Mar 17 17:51:30.434481 containerd[1495]: time="2025-03-17T17:51:30.434461641Z" level=info msg="StopPodSandbox for \"b6b3d7aea30fda8bae3bbefd9f279c8db4c1f2e037ac2da59423d40949d4333e\"" Mar 17 17:51:30.434585 containerd[1495]: time="2025-03-17T17:51:30.434561819Z" level=info msg="TearDown network for sandbox \"b6b3d7aea30fda8bae3bbefd9f279c8db4c1f2e037ac2da59423d40949d4333e\" successfully" Mar 17 17:51:30.434665 
containerd[1495]: time="2025-03-17T17:51:30.434581947Z" level=info msg="StopPodSandbox for \"b6b3d7aea30fda8bae3bbefd9f279c8db4c1f2e037ac2da59423d40949d4333e\" returns successfully" Mar 17 17:51:30.434665 containerd[1495]: time="2025-03-17T17:51:30.434468784Z" level=info msg="StopPodSandbox for \"c3bc28f45d2ec64413f682e0fa3ae1f2815a1ace0cf71f660a835588639d5f9d\" returns successfully" Mar 17 17:51:30.435099 kubelet[2599]: I0317 17:51:30.435049 2599 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="330a159792e66ced1c43f48d9ad3ac26e6a568a1bd1f57d4357afbbff34cefb2" Mar 17 17:51:30.435099 kubelet[2599]: E0317 17:51:30.435075 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:51:30.435651 containerd[1495]: time="2025-03-17T17:51:30.435609324Z" level=info msg="StopPodSandbox for \"3a287b471345919edb3a7639a2d16c54930291b1e5e9c520c2543dc01a0641d4\"" Mar 17 17:51:30.435719 containerd[1495]: time="2025-03-17T17:51:30.435683693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nk8jr,Uid:8c4845cd-7043-485d-9bdd-731020b2270e,Namespace:kube-system,Attempt:5,}" Mar 17 17:51:30.435719 containerd[1495]: time="2025-03-17T17:51:30.435712228Z" level=info msg="TearDown network for sandbox \"3a287b471345919edb3a7639a2d16c54930291b1e5e9c520c2543dc01a0641d4\" successfully" Mar 17 17:51:30.435788 containerd[1495]: time="2025-03-17T17:51:30.435726495Z" level=info msg="StopPodSandbox for \"3a287b471345919edb3a7639a2d16c54930291b1e5e9c520c2543dc01a0641d4\" returns successfully" Mar 17 17:51:30.435788 containerd[1495]: time="2025-03-17T17:51:30.435616698Z" level=info msg="StopPodSandbox for \"330a159792e66ced1c43f48d9ad3ac26e6a568a1bd1f57d4357afbbff34cefb2\"" Mar 17 17:51:30.436003 containerd[1495]: time="2025-03-17T17:51:30.435938662Z" level=info msg="Ensure that sandbox 
330a159792e66ced1c43f48d9ad3ac26e6a568a1bd1f57d4357afbbff34cefb2 in task-service has been cleanup successfully" Mar 17 17:51:30.436204 containerd[1495]: time="2025-03-17T17:51:30.436139780Z" level=info msg="StopPodSandbox for \"3e3ed683c84cdf993bdf7652a727c2e03891d4b841b79faae2a62a59c10972b8\"" Mar 17 17:51:30.436269 containerd[1495]: time="2025-03-17T17:51:30.436244927Z" level=info msg="TearDown network for sandbox \"3e3ed683c84cdf993bdf7652a727c2e03891d4b841b79faae2a62a59c10972b8\" successfully" Mar 17 17:51:30.436269 containerd[1495]: time="2025-03-17T17:51:30.436256869Z" level=info msg="StopPodSandbox for \"3e3ed683c84cdf993bdf7652a727c2e03891d4b841b79faae2a62a59c10972b8\" returns successfully" Mar 17 17:51:30.436354 containerd[1495]: time="2025-03-17T17:51:30.436335897Z" level=info msg="TearDown network for sandbox \"330a159792e66ced1c43f48d9ad3ac26e6a568a1bd1f57d4357afbbff34cefb2\" successfully" Mar 17 17:51:30.436354 containerd[1495]: time="2025-03-17T17:51:30.436350905Z" level=info msg="StopPodSandbox for \"330a159792e66ced1c43f48d9ad3ac26e6a568a1bd1f57d4357afbbff34cefb2\" returns successfully" Mar 17 17:51:30.437292 containerd[1495]: time="2025-03-17T17:51:30.437260303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9zh68,Uid:8eeb7871-e618-4798-a87d-f7b3c9c67c97,Namespace:calico-system,Attempt:6,}" Mar 17 17:51:30.437616 containerd[1495]: time="2025-03-17T17:51:30.437261986Z" level=info msg="StopPodSandbox for \"64ee7dde075136cd6c0ff4e9e4d5018ec194fc3eeceee0436462144828e657f9\"" Mar 17 17:51:30.437616 containerd[1495]: time="2025-03-17T17:51:30.437543944Z" level=info msg="TearDown network for sandbox \"64ee7dde075136cd6c0ff4e9e4d5018ec194fc3eeceee0436462144828e657f9\" successfully" Mar 17 17:51:30.437616 containerd[1495]: time="2025-03-17T17:51:30.437556588Z" level=info msg="StopPodSandbox for \"64ee7dde075136cd6c0ff4e9e4d5018ec194fc3eeceee0436462144828e657f9\" returns successfully" Mar 17 17:51:30.437799 containerd[1495]: 
time="2025-03-17T17:51:30.437779085Z" level=info msg="StopPodSandbox for \"25dd480968659bfe50f0d3e17876eff4994504e6bee36164d7809729a8f1de12\"" Mar 17 17:51:30.437887 containerd[1495]: time="2025-03-17T17:51:30.437871178Z" level=info msg="TearDown network for sandbox \"25dd480968659bfe50f0d3e17876eff4994504e6bee36164d7809729a8f1de12\" successfully" Mar 17 17:51:30.437909 containerd[1495]: time="2025-03-17T17:51:30.437885154Z" level=info msg="StopPodSandbox for \"25dd480968659bfe50f0d3e17876eff4994504e6bee36164d7809729a8f1de12\" returns successfully" Mar 17 17:51:30.438205 containerd[1495]: time="2025-03-17T17:51:30.438169719Z" level=info msg="StopPodSandbox for \"c383000bdbfc75bc913c41199db28c45179722e99e3fc996aced1021b22e2e13\"" Mar 17 17:51:30.438373 containerd[1495]: time="2025-03-17T17:51:30.438254628Z" level=info msg="TearDown network for sandbox \"c383000bdbfc75bc913c41199db28c45179722e99e3fc996aced1021b22e2e13\" successfully" Mar 17 17:51:30.438373 containerd[1495]: time="2025-03-17T17:51:30.438263805Z" level=info msg="StopPodSandbox for \"c383000bdbfc75bc913c41199db28c45179722e99e3fc996aced1021b22e2e13\" returns successfully" Mar 17 17:51:30.439289 containerd[1495]: time="2025-03-17T17:51:30.438835657Z" level=info msg="StopPodSandbox for \"ab5ca27476b38916f05e02fb891a602c7f2ed5ca72ab45f4abc3ebea237758bf\"" Mar 17 17:51:30.439289 containerd[1495]: time="2025-03-17T17:51:30.438934293Z" level=info msg="TearDown network for sandbox \"ab5ca27476b38916f05e02fb891a602c7f2ed5ca72ab45f4abc3ebea237758bf\" successfully" Mar 17 17:51:30.439289 containerd[1495]: time="2025-03-17T17:51:30.438947378Z" level=info msg="StopPodSandbox for \"ab5ca27476b38916f05e02fb891a602c7f2ed5ca72ab45f4abc3ebea237758bf\" returns successfully" Mar 17 17:51:30.439775 containerd[1495]: time="2025-03-17T17:51:30.439530862Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-7d6b67b85-j5xwp,Uid:4ee3dfd7-d4c4-495e-b4fa-6712bcf8d78e,Namespace:calico-system,Attempt:5,}" Mar 17 17:51:30.439868 kubelet[2599]: E0317 17:51:30.439553 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:51:30.445895 kubelet[2599]: I0317 17:51:30.445862 2599 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49b3f06caf72e8181566d1132f3c5b28eb96b2740762fa305748ca9eed71be9d" Mar 17 17:51:30.446965 containerd[1495]: time="2025-03-17T17:51:30.446694282Z" level=info msg="StopPodSandbox for \"49b3f06caf72e8181566d1132f3c5b28eb96b2740762fa305748ca9eed71be9d\"" Mar 17 17:51:30.447055 containerd[1495]: time="2025-03-17T17:51:30.446962536Z" level=info msg="Ensure that sandbox 49b3f06caf72e8181566d1132f3c5b28eb96b2740762fa305748ca9eed71be9d in task-service has been cleanup successfully" Mar 17 17:51:30.447890 containerd[1495]: time="2025-03-17T17:51:30.447241038Z" level=info msg="TearDown network for sandbox \"49b3f06caf72e8181566d1132f3c5b28eb96b2740762fa305748ca9eed71be9d\" successfully" Mar 17 17:51:30.447890 containerd[1495]: time="2025-03-17T17:51:30.447260745Z" level=info msg="StopPodSandbox for \"49b3f06caf72e8181566d1132f3c5b28eb96b2740762fa305748ca9eed71be9d\" returns successfully" Mar 17 17:51:30.447890 containerd[1495]: time="2025-03-17T17:51:30.447741358Z" level=info msg="StopPodSandbox for \"d4e54e69b2c6c0978d76e94b92a77df80dc1c72566f44f295fe6bd35b1e61a73\"" Mar 17 17:51:30.448002 containerd[1495]: time="2025-03-17T17:51:30.447849590Z" level=info msg="TearDown network for sandbox \"d4e54e69b2c6c0978d76e94b92a77df80dc1c72566f44f295fe6bd35b1e61a73\" successfully" Mar 17 17:51:30.448002 containerd[1495]: time="2025-03-17T17:51:30.447923810Z" level=info msg="StopPodSandbox for \"d4e54e69b2c6c0978d76e94b92a77df80dc1c72566f44f295fe6bd35b1e61a73\" returns 
successfully" Mar 17 17:51:30.449275 containerd[1495]: time="2025-03-17T17:51:30.449106419Z" level=info msg="StopPodSandbox for \"fbac00f9780cef4d54294d3c83e7122b526db0b29a522ca87666b0c7e7b9c4a0\"" Mar 17 17:51:30.449275 containerd[1495]: time="2025-03-17T17:51:30.449208180Z" level=info msg="TearDown network for sandbox \"fbac00f9780cef4d54294d3c83e7122b526db0b29a522ca87666b0c7e7b9c4a0\" successfully" Mar 17 17:51:30.449275 containerd[1495]: time="2025-03-17T17:51:30.449220984Z" level=info msg="StopPodSandbox for \"fbac00f9780cef4d54294d3c83e7122b526db0b29a522ca87666b0c7e7b9c4a0\" returns successfully" Mar 17 17:51:30.449953 containerd[1495]: time="2025-03-17T17:51:30.449914766Z" level=info msg="StopPodSandbox for \"2c183dfeb323cfd9dde4cc440f77974bf851926c3aa7c9e2accc48f0e0a01822\"" Mar 17 17:51:30.450106 containerd[1495]: time="2025-03-17T17:51:30.450086337Z" level=info msg="TearDown network for sandbox \"2c183dfeb323cfd9dde4cc440f77974bf851926c3aa7c9e2accc48f0e0a01822\" successfully" Mar 17 17:51:30.450106 containerd[1495]: time="2025-03-17T17:51:30.450101786Z" level=info msg="StopPodSandbox for \"2c183dfeb323cfd9dde4cc440f77974bf851926c3aa7c9e2accc48f0e0a01822\" returns successfully" Mar 17 17:51:30.450369 containerd[1495]: time="2025-03-17T17:51:30.450350603Z" level=info msg="StopPodSandbox for \"b90a88a1e5da255529675e67dad4d325788b366e74fc643f2c63b0f09c948b45\"" Mar 17 17:51:30.450438 containerd[1495]: time="2025-03-17T17:51:30.450421526Z" level=info msg="TearDown network for sandbox \"b90a88a1e5da255529675e67dad4d325788b366e74fc643f2c63b0f09c948b45\" successfully" Mar 17 17:51:30.450438 containerd[1495]: time="2025-03-17T17:51:30.450435552Z" level=info msg="StopPodSandbox for \"b90a88a1e5da255529675e67dad4d325788b366e74fc643f2c63b0f09c948b45\" returns successfully" Mar 17 17:51:30.450712 containerd[1495]: time="2025-03-17T17:51:30.450676374Z" level=info msg="StopPodSandbox for \"78661d8268b9ec071762ea23e31e21309122e61c5cbce70ba5e3a5ccaf5e1c2a\"" Mar 17 
17:51:30.450817 containerd[1495]: time="2025-03-17T17:51:30.450760843Z" level=info msg="TearDown network for sandbox \"78661d8268b9ec071762ea23e31e21309122e61c5cbce70ba5e3a5ccaf5e1c2a\" successfully" Mar 17 17:51:30.450817 containerd[1495]: time="2025-03-17T17:51:30.450770962Z" level=info msg="StopPodSandbox for \"78661d8268b9ec071762ea23e31e21309122e61c5cbce70ba5e3a5ccaf5e1c2a\" returns successfully" Mar 17 17:51:30.451329 containerd[1495]: time="2025-03-17T17:51:30.451295395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-db9856-fshz9,Uid:e2616273-669f-41e6-aed5-5c36404c0a1a,Namespace:calico-apiserver,Attempt:6,}" Mar 17 17:51:30.938349 systemd-networkd[1423]: cali899860f80b6: Link UP Mar 17 17:51:30.939222 systemd-networkd[1423]: cali899860f80b6: Gained carrier Mar 17 17:51:30.947709 kubelet[2599]: I0317 17:51:30.947058 2599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-5xw47" podStartSLOduration=2.9310273369999997 podStartE2EDuration="42.946991092s" podCreationTimestamp="2025-03-17 17:50:48 +0000 UTC" firstStartedPulling="2025-03-17 17:50:49.336395185 +0000 UTC m=+17.744841153" lastFinishedPulling="2025-03-17 17:51:29.35235894 +0000 UTC m=+57.760804908" observedRunningTime="2025-03-17 17:51:30.459429838 +0000 UTC m=+58.867875806" watchObservedRunningTime="2025-03-17 17:51:30.946991092 +0000 UTC m=+59.355437060" Mar 17 17:51:30.950078 containerd[1495]: 2025-03-17 17:51:30.608 [INFO][4775] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 17:51:30.950078 containerd[1495]: 2025-03-17 17:51:30.622 [INFO][4775] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7d6b67b85--j5xwp-eth0 calico-kube-controllers-7d6b67b85- calico-system 4ee3dfd7-d4c4-495e-b4fa-6712bcf8d78e 792 0 2025-03-17 17:50:48 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers 
k8s-app:calico-kube-controllers pod-template-hash:7d6b67b85 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7d6b67b85-j5xwp eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali899860f80b6 [] []}} ContainerID="4c46ecd4219ac5783f359e7d1646f93d5def229adaf908c09535f766c57a540b" Namespace="calico-system" Pod="calico-kube-controllers-7d6b67b85-j5xwp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d6b67b85--j5xwp-" Mar 17 17:51:30.950078 containerd[1495]: 2025-03-17 17:51:30.622 [INFO][4775] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4c46ecd4219ac5783f359e7d1646f93d5def229adaf908c09535f766c57a540b" Namespace="calico-system" Pod="calico-kube-controllers-7d6b67b85-j5xwp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d6b67b85--j5xwp-eth0" Mar 17 17:51:30.950078 containerd[1495]: 2025-03-17 17:51:30.831 [INFO][4842] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4c46ecd4219ac5783f359e7d1646f93d5def229adaf908c09535f766c57a540b" HandleID="k8s-pod-network.4c46ecd4219ac5783f359e7d1646f93d5def229adaf908c09535f766c57a540b" Workload="localhost-k8s-calico--kube--controllers--7d6b67b85--j5xwp-eth0" Mar 17 17:51:30.950078 containerd[1495]: 2025-03-17 17:51:30.879 [INFO][4842] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4c46ecd4219ac5783f359e7d1646f93d5def229adaf908c09535f766c57a540b" HandleID="k8s-pod-network.4c46ecd4219ac5783f359e7d1646f93d5def229adaf908c09535f766c57a540b" Workload="localhost-k8s-calico--kube--controllers--7d6b67b85--j5xwp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000406590), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7d6b67b85-j5xwp", "timestamp":"2025-03-17 17:51:30.830936363 +0000 
UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:51:30.950078 containerd[1495]: 2025-03-17 17:51:30.879 [INFO][4842] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:51:30.950078 containerd[1495]: 2025-03-17 17:51:30.879 [INFO][4842] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 17:51:30.950078 containerd[1495]: 2025-03-17 17:51:30.879 [INFO][4842] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 17 17:51:30.950078 containerd[1495]: 2025-03-17 17:51:30.882 [INFO][4842] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4c46ecd4219ac5783f359e7d1646f93d5def229adaf908c09535f766c57a540b" host="localhost" Mar 17 17:51:30.950078 containerd[1495]: 2025-03-17 17:51:30.888 [INFO][4842] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 17 17:51:30.950078 containerd[1495]: 2025-03-17 17:51:30.894 [INFO][4842] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 17 17:51:30.950078 containerd[1495]: 2025-03-17 17:51:30.896 [INFO][4842] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 17 17:51:30.950078 containerd[1495]: 2025-03-17 17:51:30.898 [INFO][4842] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 17 17:51:30.950078 containerd[1495]: 2025-03-17 17:51:30.898 [INFO][4842] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4c46ecd4219ac5783f359e7d1646f93d5def229adaf908c09535f766c57a540b" host="localhost" Mar 17 17:51:30.950078 containerd[1495]: 2025-03-17 17:51:30.899 [INFO][4842] ipam/ipam.go 1685: Creating new handle: 
k8s-pod-network.4c46ecd4219ac5783f359e7d1646f93d5def229adaf908c09535f766c57a540b Mar 17 17:51:30.950078 containerd[1495]: 2025-03-17 17:51:30.905 [INFO][4842] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4c46ecd4219ac5783f359e7d1646f93d5def229adaf908c09535f766c57a540b" host="localhost" Mar 17 17:51:30.950078 containerd[1495]: 2025-03-17 17:51:30.925 [INFO][4842] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.4c46ecd4219ac5783f359e7d1646f93d5def229adaf908c09535f766c57a540b" host="localhost" Mar 17 17:51:30.950078 containerd[1495]: 2025-03-17 17:51:30.925 [INFO][4842] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.4c46ecd4219ac5783f359e7d1646f93d5def229adaf908c09535f766c57a540b" host="localhost" Mar 17 17:51:30.950078 containerd[1495]: 2025-03-17 17:51:30.925 [INFO][4842] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 17 17:51:30.950078 containerd[1495]: 2025-03-17 17:51:30.925 [INFO][4842] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="4c46ecd4219ac5783f359e7d1646f93d5def229adaf908c09535f766c57a540b" HandleID="k8s-pod-network.4c46ecd4219ac5783f359e7d1646f93d5def229adaf908c09535f766c57a540b" Workload="localhost-k8s-calico--kube--controllers--7d6b67b85--j5xwp-eth0" Mar 17 17:51:30.950752 containerd[1495]: 2025-03-17 17:51:30.931 [INFO][4775] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4c46ecd4219ac5783f359e7d1646f93d5def229adaf908c09535f766c57a540b" Namespace="calico-system" Pod="calico-kube-controllers-7d6b67b85-j5xwp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d6b67b85--j5xwp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7d6b67b85--j5xwp-eth0", GenerateName:"calico-kube-controllers-7d6b67b85-", Namespace:"calico-system", SelfLink:"", UID:"4ee3dfd7-d4c4-495e-b4fa-6712bcf8d78e", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 50, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d6b67b85", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7d6b67b85-j5xwp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali899860f80b6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:51:30.950752 containerd[1495]: 2025-03-17 17:51:30.931 [INFO][4775] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="4c46ecd4219ac5783f359e7d1646f93d5def229adaf908c09535f766c57a540b" Namespace="calico-system" Pod="calico-kube-controllers-7d6b67b85-j5xwp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d6b67b85--j5xwp-eth0" Mar 17 17:51:30.950752 containerd[1495]: 2025-03-17 17:51:30.931 [INFO][4775] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali899860f80b6 ContainerID="4c46ecd4219ac5783f359e7d1646f93d5def229adaf908c09535f766c57a540b" Namespace="calico-system" Pod="calico-kube-controllers-7d6b67b85-j5xwp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d6b67b85--j5xwp-eth0" Mar 17 17:51:30.950752 containerd[1495]: 2025-03-17 17:51:30.938 [INFO][4775] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4c46ecd4219ac5783f359e7d1646f93d5def229adaf908c09535f766c57a540b" Namespace="calico-system" Pod="calico-kube-controllers-7d6b67b85-j5xwp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d6b67b85--j5xwp-eth0" Mar 17 17:51:30.950752 containerd[1495]: 2025-03-17 17:51:30.939 [INFO][4775] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4c46ecd4219ac5783f359e7d1646f93d5def229adaf908c09535f766c57a540b" Namespace="calico-system" Pod="calico-kube-controllers-7d6b67b85-j5xwp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d6b67b85--j5xwp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7d6b67b85--j5xwp-eth0", GenerateName:"calico-kube-controllers-7d6b67b85-", Namespace:"calico-system", SelfLink:"", UID:"4ee3dfd7-d4c4-495e-b4fa-6712bcf8d78e", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 50, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d6b67b85", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4c46ecd4219ac5783f359e7d1646f93d5def229adaf908c09535f766c57a540b", Pod:"calico-kube-controllers-7d6b67b85-j5xwp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali899860f80b6", MAC:"36:53:08:eb:27:67", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:51:30.950752 containerd[1495]: 2025-03-17 17:51:30.947 [INFO][4775] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4c46ecd4219ac5783f359e7d1646f93d5def229adaf908c09535f766c57a540b" Namespace="calico-system" Pod="calico-kube-controllers-7d6b67b85-j5xwp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d6b67b85--j5xwp-eth0" Mar 17 17:51:30.998390 containerd[1495]: time="2025-03-17T17:51:30.998130604Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:51:30.998390 containerd[1495]: time="2025-03-17T17:51:30.998186278Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:51:30.998390 containerd[1495]: time="2025-03-17T17:51:30.998196217Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:51:30.998390 containerd[1495]: time="2025-03-17T17:51:30.998284292Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:51:31.027439 systemd[1]: Started cri-containerd-4c46ecd4219ac5783f359e7d1646f93d5def229adaf908c09535f766c57a540b.scope - libcontainer container 4c46ecd4219ac5783f359e7d1646f93d5def229adaf908c09535f766c57a540b. Mar 17 17:51:31.047763 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 17:51:31.084727 containerd[1495]: time="2025-03-17T17:51:31.084579100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d6b67b85-j5xwp,Uid:4ee3dfd7-d4c4-495e-b4fa-6712bcf8d78e,Namespace:calico-system,Attempt:5,} returns sandbox id \"4c46ecd4219ac5783f359e7d1646f93d5def229adaf908c09535f766c57a540b\"" Mar 17 17:51:31.086359 containerd[1495]: time="2025-03-17T17:51:31.086326522Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\"" Mar 17 17:51:31.216780 systemd[1]: run-netns-cni\x2deec59655\x2d69ac\x2dc10e\x2d9f75\x2dad0c6caa59c2.mount: Deactivated successfully. Mar 17 17:51:31.216886 systemd[1]: run-netns-cni\x2d39c8a982\x2d1f08\x2d7ca3\x2d2fa1\x2d6b154e623baf.mount: Deactivated successfully. 
Mar 17 17:51:31.441910 systemd-networkd[1423]: calid73f4f4eba2: Link UP Mar 17 17:51:31.442443 systemd-networkd[1423]: calid73f4f4eba2: Gained carrier Mar 17 17:51:31.451834 kubelet[2599]: E0317 17:51:31.451789 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:51:31.494330 containerd[1495]: 2025-03-17 17:51:30.592 [INFO][4782] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 17:51:31.494330 containerd[1495]: 2025-03-17 17:51:30.798 [INFO][4782] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--nk8jr-eth0 coredns-668d6bf9bc- kube-system 8c4845cd-7043-485d-9bdd-731020b2270e 793 0 2025-03-17 17:50:35 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-nk8jr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid73f4f4eba2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="219dc6844fd05c25b8627884846207a2514b264d5c4b73d6f39fcfe2a5142ffa" Namespace="kube-system" Pod="coredns-668d6bf9bc-nk8jr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--nk8jr-" Mar 17 17:51:31.494330 containerd[1495]: 2025-03-17 17:51:30.798 [INFO][4782] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="219dc6844fd05c25b8627884846207a2514b264d5c4b73d6f39fcfe2a5142ffa" Namespace="kube-system" Pod="coredns-668d6bf9bc-nk8jr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--nk8jr-eth0" Mar 17 17:51:31.494330 containerd[1495]: 2025-03-17 17:51:30.845 [INFO][4853] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="219dc6844fd05c25b8627884846207a2514b264d5c4b73d6f39fcfe2a5142ffa" 
HandleID="k8s-pod-network.219dc6844fd05c25b8627884846207a2514b264d5c4b73d6f39fcfe2a5142ffa" Workload="localhost-k8s-coredns--668d6bf9bc--nk8jr-eth0" Mar 17 17:51:31.494330 containerd[1495]: 2025-03-17 17:51:30.879 [INFO][4853] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="219dc6844fd05c25b8627884846207a2514b264d5c4b73d6f39fcfe2a5142ffa" HandleID="k8s-pod-network.219dc6844fd05c25b8627884846207a2514b264d5c4b73d6f39fcfe2a5142ffa" Workload="localhost-k8s-coredns--668d6bf9bc--nk8jr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000307650), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-nk8jr", "timestamp":"2025-03-17 17:51:30.845070573 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:51:31.494330 containerd[1495]: 2025-03-17 17:51:30.879 [INFO][4853] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:51:31.494330 containerd[1495]: 2025-03-17 17:51:30.925 [INFO][4853] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 17 17:51:31.494330 containerd[1495]: 2025-03-17 17:51:30.926 [INFO][4853] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 17 17:51:31.494330 containerd[1495]: 2025-03-17 17:51:30.982 [INFO][4853] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.219dc6844fd05c25b8627884846207a2514b264d5c4b73d6f39fcfe2a5142ffa" host="localhost" Mar 17 17:51:31.494330 containerd[1495]: 2025-03-17 17:51:31.077 [INFO][4853] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 17 17:51:31.494330 containerd[1495]: 2025-03-17 17:51:31.253 [INFO][4853] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 17 17:51:31.494330 containerd[1495]: 2025-03-17 17:51:31.255 [INFO][4853] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 17 17:51:31.494330 containerd[1495]: 2025-03-17 17:51:31.257 [INFO][4853] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 17 17:51:31.494330 containerd[1495]: 2025-03-17 17:51:31.257 [INFO][4853] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.219dc6844fd05c25b8627884846207a2514b264d5c4b73d6f39fcfe2a5142ffa" host="localhost" Mar 17 17:51:31.494330 containerd[1495]: 2025-03-17 17:51:31.258 [INFO][4853] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.219dc6844fd05c25b8627884846207a2514b264d5c4b73d6f39fcfe2a5142ffa Mar 17 17:51:31.494330 containerd[1495]: 2025-03-17 17:51:31.408 [INFO][4853] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.219dc6844fd05c25b8627884846207a2514b264d5c4b73d6f39fcfe2a5142ffa" host="localhost" Mar 17 17:51:31.494330 containerd[1495]: 2025-03-17 17:51:31.435 [INFO][4853] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.219dc6844fd05c25b8627884846207a2514b264d5c4b73d6f39fcfe2a5142ffa" host="localhost" Mar 17 17:51:31.494330 containerd[1495]: 2025-03-17 17:51:31.436 [INFO][4853] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.219dc6844fd05c25b8627884846207a2514b264d5c4b73d6f39fcfe2a5142ffa" host="localhost" Mar 17 17:51:31.494330 containerd[1495]: 2025-03-17 17:51:31.436 [INFO][4853] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 17:51:31.494330 containerd[1495]: 2025-03-17 17:51:31.436 [INFO][4853] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="219dc6844fd05c25b8627884846207a2514b264d5c4b73d6f39fcfe2a5142ffa" HandleID="k8s-pod-network.219dc6844fd05c25b8627884846207a2514b264d5c4b73d6f39fcfe2a5142ffa" Workload="localhost-k8s-coredns--668d6bf9bc--nk8jr-eth0" Mar 17 17:51:31.495856 containerd[1495]: 2025-03-17 17:51:31.439 [INFO][4782] cni-plugin/k8s.go 386: Populated endpoint ContainerID="219dc6844fd05c25b8627884846207a2514b264d5c4b73d6f39fcfe2a5142ffa" Namespace="kube-system" Pod="coredns-668d6bf9bc-nk8jr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--nk8jr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--nk8jr-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8c4845cd-7043-485d-9bdd-731020b2270e", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 50, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-nk8jr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid73f4f4eba2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:51:31.495856 containerd[1495]: 2025-03-17 17:51:31.439 [INFO][4782] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="219dc6844fd05c25b8627884846207a2514b264d5c4b73d6f39fcfe2a5142ffa" Namespace="kube-system" Pod="coredns-668d6bf9bc-nk8jr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--nk8jr-eth0" Mar 17 17:51:31.495856 containerd[1495]: 2025-03-17 17:51:31.439 [INFO][4782] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid73f4f4eba2 ContainerID="219dc6844fd05c25b8627884846207a2514b264d5c4b73d6f39fcfe2a5142ffa" Namespace="kube-system" Pod="coredns-668d6bf9bc-nk8jr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--nk8jr-eth0" Mar 17 17:51:31.495856 containerd[1495]: 2025-03-17 17:51:31.442 [INFO][4782] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="219dc6844fd05c25b8627884846207a2514b264d5c4b73d6f39fcfe2a5142ffa" Namespace="kube-system" Pod="coredns-668d6bf9bc-nk8jr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--nk8jr-eth0" Mar 17 
17:51:31.495856 containerd[1495]: 2025-03-17 17:51:31.442 [INFO][4782] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="219dc6844fd05c25b8627884846207a2514b264d5c4b73d6f39fcfe2a5142ffa" Namespace="kube-system" Pod="coredns-668d6bf9bc-nk8jr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--nk8jr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--nk8jr-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8c4845cd-7043-485d-9bdd-731020b2270e", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 50, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"219dc6844fd05c25b8627884846207a2514b264d5c4b73d6f39fcfe2a5142ffa", Pod:"coredns-668d6bf9bc-nk8jr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid73f4f4eba2", MAC:"fe:f9:dd:e2:79:db", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:51:31.495856 containerd[1495]: 2025-03-17 17:51:31.486 [INFO][4782] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="219dc6844fd05c25b8627884846207a2514b264d5c4b73d6f39fcfe2a5142ffa" Namespace="kube-system" Pod="coredns-668d6bf9bc-nk8jr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--nk8jr-eth0" Mar 17 17:51:31.716005 systemd-networkd[1423]: cali5bab4aeb827: Link UP Mar 17 17:51:31.716517 systemd-networkd[1423]: cali5bab4aeb827: Gained carrier Mar 17 17:51:31.741620 containerd[1495]: 2025-03-17 17:51:30.491 [INFO][4726] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 17:51:31.741620 containerd[1495]: 2025-03-17 17:51:30.521 [INFO][4726] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--db9856--swh96-eth0 calico-apiserver-db9856- calico-apiserver 802f1eaf-7d52-4b00-9fa9-f37418e92a64 791 0 2025-03-17 17:50:48 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:db9856 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-db9856-swh96 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5bab4aeb827 [] []}} ContainerID="f065516ea452ed4b804ad17b255116335dbf8f890933b1970356ea69bfd6f3ce" Namespace="calico-apiserver" Pod="calico-apiserver-db9856-swh96" WorkloadEndpoint="localhost-k8s-calico--apiserver--db9856--swh96-" Mar 17 17:51:31.741620 containerd[1495]: 2025-03-17 17:51:30.521 [INFO][4726] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="f065516ea452ed4b804ad17b255116335dbf8f890933b1970356ea69bfd6f3ce" Namespace="calico-apiserver" Pod="calico-apiserver-db9856-swh96" WorkloadEndpoint="localhost-k8s-calico--apiserver--db9856--swh96-eth0" Mar 17 17:51:31.741620 containerd[1495]: 2025-03-17 17:51:30.820 [INFO][4784] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f065516ea452ed4b804ad17b255116335dbf8f890933b1970356ea69bfd6f3ce" HandleID="k8s-pod-network.f065516ea452ed4b804ad17b255116335dbf8f890933b1970356ea69bfd6f3ce" Workload="localhost-k8s-calico--apiserver--db9856--swh96-eth0" Mar 17 17:51:31.741620 containerd[1495]: 2025-03-17 17:51:30.880 [INFO][4784] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f065516ea452ed4b804ad17b255116335dbf8f890933b1970356ea69bfd6f3ce" HandleID="k8s-pod-network.f065516ea452ed4b804ad17b255116335dbf8f890933b1970356ea69bfd6f3ce" Workload="localhost-k8s-calico--apiserver--db9856--swh96-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00036d830), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-db9856-swh96", "timestamp":"2025-03-17 17:51:30.820386028 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:51:31.741620 containerd[1495]: 2025-03-17 17:51:30.880 [INFO][4784] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:51:31.741620 containerd[1495]: 2025-03-17 17:51:31.436 [INFO][4784] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 17 17:51:31.741620 containerd[1495]: 2025-03-17 17:51:31.436 [INFO][4784] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 17 17:51:31.741620 containerd[1495]: 2025-03-17 17:51:31.439 [INFO][4784] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f065516ea452ed4b804ad17b255116335dbf8f890933b1970356ea69bfd6f3ce" host="localhost" Mar 17 17:51:31.741620 containerd[1495]: 2025-03-17 17:51:31.447 [INFO][4784] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 17 17:51:31.741620 containerd[1495]: 2025-03-17 17:51:31.505 [INFO][4784] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 17 17:51:31.741620 containerd[1495]: 2025-03-17 17:51:31.507 [INFO][4784] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 17 17:51:31.741620 containerd[1495]: 2025-03-17 17:51:31.668 [INFO][4784] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 17 17:51:31.741620 containerd[1495]: 2025-03-17 17:51:31.668 [INFO][4784] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f065516ea452ed4b804ad17b255116335dbf8f890933b1970356ea69bfd6f3ce" host="localhost" Mar 17 17:51:31.741620 containerd[1495]: 2025-03-17 17:51:31.671 [INFO][4784] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f065516ea452ed4b804ad17b255116335dbf8f890933b1970356ea69bfd6f3ce Mar 17 17:51:31.741620 containerd[1495]: 2025-03-17 17:51:31.693 [INFO][4784] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f065516ea452ed4b804ad17b255116335dbf8f890933b1970356ea69bfd6f3ce" host="localhost" Mar 17 17:51:31.741620 containerd[1495]: 2025-03-17 17:51:31.707 [INFO][4784] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.f065516ea452ed4b804ad17b255116335dbf8f890933b1970356ea69bfd6f3ce" host="localhost" Mar 17 17:51:31.741620 containerd[1495]: 2025-03-17 17:51:31.707 [INFO][4784] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.f065516ea452ed4b804ad17b255116335dbf8f890933b1970356ea69bfd6f3ce" host="localhost" Mar 17 17:51:31.741620 containerd[1495]: 2025-03-17 17:51:31.707 [INFO][4784] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 17:51:31.741620 containerd[1495]: 2025-03-17 17:51:31.707 [INFO][4784] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="f065516ea452ed4b804ad17b255116335dbf8f890933b1970356ea69bfd6f3ce" HandleID="k8s-pod-network.f065516ea452ed4b804ad17b255116335dbf8f890933b1970356ea69bfd6f3ce" Workload="localhost-k8s-calico--apiserver--db9856--swh96-eth0" Mar 17 17:51:31.742645 containerd[1495]: 2025-03-17 17:51:31.712 [INFO][4726] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f065516ea452ed4b804ad17b255116335dbf8f890933b1970356ea69bfd6f3ce" Namespace="calico-apiserver" Pod="calico-apiserver-db9856-swh96" WorkloadEndpoint="localhost-k8s-calico--apiserver--db9856--swh96-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--db9856--swh96-eth0", GenerateName:"calico-apiserver-db9856-", Namespace:"calico-apiserver", SelfLink:"", UID:"802f1eaf-7d52-4b00-9fa9-f37418e92a64", ResourceVersion:"791", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 50, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"db9856", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-db9856-swh96", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5bab4aeb827", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:51:31.742645 containerd[1495]: 2025-03-17 17:51:31.712 [INFO][4726] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="f065516ea452ed4b804ad17b255116335dbf8f890933b1970356ea69bfd6f3ce" Namespace="calico-apiserver" Pod="calico-apiserver-db9856-swh96" WorkloadEndpoint="localhost-k8s-calico--apiserver--db9856--swh96-eth0" Mar 17 17:51:31.742645 containerd[1495]: 2025-03-17 17:51:31.712 [INFO][4726] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5bab4aeb827 ContainerID="f065516ea452ed4b804ad17b255116335dbf8f890933b1970356ea69bfd6f3ce" Namespace="calico-apiserver" Pod="calico-apiserver-db9856-swh96" WorkloadEndpoint="localhost-k8s-calico--apiserver--db9856--swh96-eth0" Mar 17 17:51:31.742645 containerd[1495]: 2025-03-17 17:51:31.716 [INFO][4726] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f065516ea452ed4b804ad17b255116335dbf8f890933b1970356ea69bfd6f3ce" Namespace="calico-apiserver" Pod="calico-apiserver-db9856-swh96" WorkloadEndpoint="localhost-k8s-calico--apiserver--db9856--swh96-eth0" Mar 17 17:51:31.742645 containerd[1495]: 2025-03-17 17:51:31.717 [INFO][4726] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="f065516ea452ed4b804ad17b255116335dbf8f890933b1970356ea69bfd6f3ce" Namespace="calico-apiserver" Pod="calico-apiserver-db9856-swh96" WorkloadEndpoint="localhost-k8s-calico--apiserver--db9856--swh96-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--db9856--swh96-eth0", GenerateName:"calico-apiserver-db9856-", Namespace:"calico-apiserver", SelfLink:"", UID:"802f1eaf-7d52-4b00-9fa9-f37418e92a64", ResourceVersion:"791", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 50, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"db9856", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f065516ea452ed4b804ad17b255116335dbf8f890933b1970356ea69bfd6f3ce", Pod:"calico-apiserver-db9856-swh96", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5bab4aeb827", MAC:"86:9a:93:b4:e2:6c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:51:31.742645 containerd[1495]: 2025-03-17 17:51:31.738 [INFO][4726] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f065516ea452ed4b804ad17b255116335dbf8f890933b1970356ea69bfd6f3ce" Namespace="calico-apiserver" 
Pod="calico-apiserver-db9856-swh96" WorkloadEndpoint="localhost-k8s-calico--apiserver--db9856--swh96-eth0" Mar 17 17:51:31.755125 containerd[1495]: time="2025-03-17T17:51:31.754179841Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:51:31.755125 containerd[1495]: time="2025-03-17T17:51:31.754306920Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:51:31.755125 containerd[1495]: time="2025-03-17T17:51:31.754322169Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:51:31.755125 containerd[1495]: time="2025-03-17T17:51:31.754519889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:51:31.772193 containerd[1495]: time="2025-03-17T17:51:31.772151560Z" level=info msg="StopPodSandbox for \"ab5ca27476b38916f05e02fb891a602c7f2ed5ca72ab45f4abc3ebea237758bf\"" Mar 17 17:51:31.772307 containerd[1495]: time="2025-03-17T17:51:31.772257790Z" level=info msg="TearDown network for sandbox \"ab5ca27476b38916f05e02fb891a602c7f2ed5ca72ab45f4abc3ebea237758bf\" successfully" Mar 17 17:51:31.772307 containerd[1495]: time="2025-03-17T17:51:31.772267979Z" level=info msg="StopPodSandbox for \"ab5ca27476b38916f05e02fb891a602c7f2ed5ca72ab45f4abc3ebea237758bf\" returns successfully" Mar 17 17:51:31.785238 systemd[1]: Started cri-containerd-219dc6844fd05c25b8627884846207a2514b264d5c4b73d6f39fcfe2a5142ffa.scope - libcontainer container 219dc6844fd05c25b8627884846207a2514b264d5c4b73d6f39fcfe2a5142ffa. 
Mar 17 17:51:31.798899 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 17:51:31.829573 containerd[1495]: time="2025-03-17T17:51:31.829498552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nk8jr,Uid:8c4845cd-7043-485d-9bdd-731020b2270e,Namespace:kube-system,Attempt:5,} returns sandbox id \"219dc6844fd05c25b8627884846207a2514b264d5c4b73d6f39fcfe2a5142ffa\"" Mar 17 17:51:31.830478 kubelet[2599]: E0317 17:51:31.830443 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:51:31.832075 containerd[1495]: time="2025-03-17T17:51:31.832040124Z" level=info msg="CreateContainer within sandbox \"219dc6844fd05c25b8627884846207a2514b264d5c4b73d6f39fcfe2a5142ffa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 17:51:31.859592 containerd[1495]: time="2025-03-17T17:51:31.859541519Z" level=info msg="RemovePodSandbox for \"ab5ca27476b38916f05e02fb891a602c7f2ed5ca72ab45f4abc3ebea237758bf\"" Mar 17 17:51:31.873056 containerd[1495]: time="2025-03-17T17:51:31.872904233Z" level=info msg="Forcibly stopping sandbox \"ab5ca27476b38916f05e02fb891a602c7f2ed5ca72ab45f4abc3ebea237758bf\"" Mar 17 17:51:31.873370 containerd[1495]: time="2025-03-17T17:51:31.873093538Z" level=info msg="TearDown network for sandbox \"ab5ca27476b38916f05e02fb891a602c7f2ed5ca72ab45f4abc3ebea237758bf\" successfully" Mar 17 17:51:31.882900 containerd[1495]: time="2025-03-17T17:51:31.882466199Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:51:31.882900 containerd[1495]: time="2025-03-17T17:51:31.882734493Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:51:31.882900 containerd[1495]: time="2025-03-17T17:51:31.882754761Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:51:31.883654 containerd[1495]: time="2025-03-17T17:51:31.883392539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:51:31.900688 containerd[1495]: time="2025-03-17T17:51:31.900646640Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ab5ca27476b38916f05e02fb891a602c7f2ed5ca72ab45f4abc3ebea237758bf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 17 17:51:31.901086 containerd[1495]: time="2025-03-17T17:51:31.900983663Z" level=info msg="RemovePodSandbox \"ab5ca27476b38916f05e02fb891a602c7f2ed5ca72ab45f4abc3ebea237758bf\" returns successfully" Mar 17 17:51:31.901452 containerd[1495]: time="2025-03-17T17:51:31.901430403Z" level=info msg="CreateContainer within sandbox \"219dc6844fd05c25b8627884846207a2514b264d5c4b73d6f39fcfe2a5142ffa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2b3ba10785bb4fc8591875bfced5a56b5d75b7c20bbc4a171b73e9a15ef89375\"" Mar 17 17:51:31.903084 containerd[1495]: time="2025-03-17T17:51:31.902537752Z" level=info msg="StartContainer for \"2b3ba10785bb4fc8591875bfced5a56b5d75b7c20bbc4a171b73e9a15ef89375\"" Mar 17 17:51:31.903531 containerd[1495]: time="2025-03-17T17:51:31.903513555Z" level=info msg="StopPodSandbox for \"c383000bdbfc75bc913c41199db28c45179722e99e3fc996aced1021b22e2e13\"" Mar 17 17:51:31.903694 containerd[1495]: time="2025-03-17T17:51:31.903678484Z" level=info msg="TearDown network for sandbox \"c383000bdbfc75bc913c41199db28c45179722e99e3fc996aced1021b22e2e13\" successfully" Mar 17 17:51:31.903958 containerd[1495]: time="2025-03-17T17:51:31.903887417Z" 
level=info msg="StopPodSandbox for \"c383000bdbfc75bc913c41199db28c45179722e99e3fc996aced1021b22e2e13\" returns successfully" Mar 17 17:51:31.904828 containerd[1495]: time="2025-03-17T17:51:31.904798497Z" level=info msg="RemovePodSandbox for \"c383000bdbfc75bc913c41199db28c45179722e99e3fc996aced1021b22e2e13\"" Mar 17 17:51:31.904940 containerd[1495]: time="2025-03-17T17:51:31.904925015Z" level=info msg="Forcibly stopping sandbox \"c383000bdbfc75bc913c41199db28c45179722e99e3fc996aced1021b22e2e13\"" Mar 17 17:51:31.905956 containerd[1495]: time="2025-03-17T17:51:31.905083052Z" level=info msg="TearDown network for sandbox \"c383000bdbfc75bc913c41199db28c45179722e99e3fc996aced1021b22e2e13\" successfully" Mar 17 17:51:31.914601 systemd[1]: Started cri-containerd-f065516ea452ed4b804ad17b255116335dbf8f890933b1970356ea69bfd6f3ce.scope - libcontainer container f065516ea452ed4b804ad17b255116335dbf8f890933b1970356ea69bfd6f3ce. Mar 17 17:51:31.923130 containerd[1495]: time="2025-03-17T17:51:31.923076351Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c383000bdbfc75bc913c41199db28c45179722e99e3fc996aced1021b22e2e13\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:51:31.923468 containerd[1495]: time="2025-03-17T17:51:31.923300482Z" level=info msg="RemovePodSandbox \"c383000bdbfc75bc913c41199db28c45179722e99e3fc996aced1021b22e2e13\" returns successfully" Mar 17 17:51:31.924594 containerd[1495]: time="2025-03-17T17:51:31.924200392Z" level=info msg="StopPodSandbox for \"25dd480968659bfe50f0d3e17876eff4994504e6bee36164d7809729a8f1de12\"" Mar 17 17:51:31.924594 containerd[1495]: time="2025-03-17T17:51:31.924319566Z" level=info msg="TearDown network for sandbox \"25dd480968659bfe50f0d3e17876eff4994504e6bee36164d7809729a8f1de12\" successfully" Mar 17 17:51:31.924594 containerd[1495]: time="2025-03-17T17:51:31.924330106Z" level=info msg="StopPodSandbox for \"25dd480968659bfe50f0d3e17876eff4994504e6bee36164d7809729a8f1de12\" returns successfully" Mar 17 17:51:31.924950 containerd[1495]: time="2025-03-17T17:51:31.924930493Z" level=info msg="RemovePodSandbox for \"25dd480968659bfe50f0d3e17876eff4994504e6bee36164d7809729a8f1de12\"" Mar 17 17:51:31.925067 containerd[1495]: time="2025-03-17T17:51:31.925039959Z" level=info msg="Forcibly stopping sandbox \"25dd480968659bfe50f0d3e17876eff4994504e6bee36164d7809729a8f1de12\"" Mar 17 17:51:31.925281 containerd[1495]: time="2025-03-17T17:51:31.925234034Z" level=info msg="TearDown network for sandbox \"25dd480968659bfe50f0d3e17876eff4994504e6bee36164d7809729a8f1de12\" successfully" Mar 17 17:51:31.929059 kernel: bpftool[5202]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 17 17:51:31.938426 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 17:51:31.941512 systemd-networkd[1423]: cali86c90edc25c: Link UP Mar 17 17:51:31.943151 systemd-networkd[1423]: cali86c90edc25c: Gained carrier Mar 17 17:51:31.946760 containerd[1495]: time="2025-03-17T17:51:31.943846756Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID 
\"25dd480968659bfe50f0d3e17876eff4994504e6bee36164d7809729a8f1de12\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 17 17:51:31.946760 containerd[1495]: time="2025-03-17T17:51:31.944247929Z" level=info msg="RemovePodSandbox \"25dd480968659bfe50f0d3e17876eff4994504e6bee36164d7809729a8f1de12\" returns successfully" Mar 17 17:51:31.946760 containerd[1495]: time="2025-03-17T17:51:31.946032891Z" level=info msg="StopPodSandbox for \"64ee7dde075136cd6c0ff4e9e4d5018ec194fc3eeceee0436462144828e657f9\"" Mar 17 17:51:31.946760 containerd[1495]: time="2025-03-17T17:51:31.946157415Z" level=info msg="TearDown network for sandbox \"64ee7dde075136cd6c0ff4e9e4d5018ec194fc3eeceee0436462144828e657f9\" successfully" Mar 17 17:51:31.946760 containerd[1495]: time="2025-03-17T17:51:31.946221015Z" level=info msg="StopPodSandbox for \"64ee7dde075136cd6c0ff4e9e4d5018ec194fc3eeceee0436462144828e657f9\" returns successfully" Mar 17 17:51:31.946180 systemd[1]: Started cri-containerd-2b3ba10785bb4fc8591875bfced5a56b5d75b7c20bbc4a171b73e9a15ef89375.scope - libcontainer container 2b3ba10785bb4fc8591875bfced5a56b5d75b7c20bbc4a171b73e9a15ef89375. 
Mar 17 17:51:31.948901 containerd[1495]: time="2025-03-17T17:51:31.948450922Z" level=info msg="RemovePodSandbox for \"64ee7dde075136cd6c0ff4e9e4d5018ec194fc3eeceee0436462144828e657f9\"" Mar 17 17:51:31.948901 containerd[1495]: time="2025-03-17T17:51:31.948477893Z" level=info msg="Forcibly stopping sandbox \"64ee7dde075136cd6c0ff4e9e4d5018ec194fc3eeceee0436462144828e657f9\"" Mar 17 17:51:31.948901 containerd[1495]: time="2025-03-17T17:51:31.948810778Z" level=info msg="TearDown network for sandbox \"64ee7dde075136cd6c0ff4e9e4d5018ec194fc3eeceee0436462144828e657f9\" successfully" Mar 17 17:51:31.955432 containerd[1495]: time="2025-03-17T17:51:31.955300475Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"64ee7dde075136cd6c0ff4e9e4d5018ec194fc3eeceee0436462144828e657f9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 17 17:51:31.955581 containerd[1495]: time="2025-03-17T17:51:31.955399110Z" level=info msg="RemovePodSandbox \"64ee7dde075136cd6c0ff4e9e4d5018ec194fc3eeceee0436462144828e657f9\" returns successfully" Mar 17 17:51:31.956680 containerd[1495]: time="2025-03-17T17:51:31.956599805Z" level=info msg="StopPodSandbox for \"330a159792e66ced1c43f48d9ad3ac26e6a568a1bd1f57d4357afbbff34cefb2\"" Mar 17 17:51:31.956930 containerd[1495]: time="2025-03-17T17:51:31.956855044Z" level=info msg="TearDown network for sandbox \"330a159792e66ced1c43f48d9ad3ac26e6a568a1bd1f57d4357afbbff34cefb2\" successfully" Mar 17 17:51:31.956930 containerd[1495]: time="2025-03-17T17:51:31.956869240Z" level=info msg="StopPodSandbox for \"330a159792e66ced1c43f48d9ad3ac26e6a568a1bd1f57d4357afbbff34cefb2\" returns successfully" Mar 17 17:51:31.957493 containerd[1495]: time="2025-03-17T17:51:31.957468436Z" level=info msg="RemovePodSandbox for \"330a159792e66ced1c43f48d9ad3ac26e6a568a1bd1f57d4357afbbff34cefb2\"" Mar 17 17:51:31.957607 containerd[1495]: time="2025-03-17T17:51:31.957589984Z" level=info 
msg="Forcibly stopping sandbox \"330a159792e66ced1c43f48d9ad3ac26e6a568a1bd1f57d4357afbbff34cefb2\"" Mar 17 17:51:31.958263 containerd[1495]: time="2025-03-17T17:51:31.957728184Z" level=info msg="TearDown network for sandbox \"330a159792e66ced1c43f48d9ad3ac26e6a568a1bd1f57d4357afbbff34cefb2\" successfully" Mar 17 17:51:31.959524 containerd[1495]: 2025-03-17 17:51:30.554 [INFO][4760] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 17:51:31.959524 containerd[1495]: 2025-03-17 17:51:30.588 [INFO][4760] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--t9ppl-eth0 coredns-668d6bf9bc- kube-system b10ce8b2-d481-4335-85f1-af093a79a238 789 0 2025-03-17 17:50:35 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-t9ppl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali86c90edc25c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="69e01cee8ce0c7aa13b8d9e80cefa48e638835c0ebb7a4f47fc504b91698f177" Namespace="kube-system" Pod="coredns-668d6bf9bc-t9ppl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--t9ppl-" Mar 17 17:51:31.959524 containerd[1495]: 2025-03-17 17:51:30.589 [INFO][4760] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="69e01cee8ce0c7aa13b8d9e80cefa48e638835c0ebb7a4f47fc504b91698f177" Namespace="kube-system" Pod="coredns-668d6bf9bc-t9ppl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--t9ppl-eth0" Mar 17 17:51:31.959524 containerd[1495]: 2025-03-17 17:51:30.820 [INFO][4827] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="69e01cee8ce0c7aa13b8d9e80cefa48e638835c0ebb7a4f47fc504b91698f177" HandleID="k8s-pod-network.69e01cee8ce0c7aa13b8d9e80cefa48e638835c0ebb7a4f47fc504b91698f177" 
Workload="localhost-k8s-coredns--668d6bf9bc--t9ppl-eth0" Mar 17 17:51:31.959524 containerd[1495]: 2025-03-17 17:51:30.881 [INFO][4827] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="69e01cee8ce0c7aa13b8d9e80cefa48e638835c0ebb7a4f47fc504b91698f177" HandleID="k8s-pod-network.69e01cee8ce0c7aa13b8d9e80cefa48e638835c0ebb7a4f47fc504b91698f177" Workload="localhost-k8s-coredns--668d6bf9bc--t9ppl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000392d70), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-t9ppl", "timestamp":"2025-03-17 17:51:30.820542892 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:51:31.959524 containerd[1495]: 2025-03-17 17:51:30.881 [INFO][4827] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:51:31.959524 containerd[1495]: 2025-03-17 17:51:31.707 [INFO][4827] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 17 17:51:31.959524 containerd[1495]: 2025-03-17 17:51:31.707 [INFO][4827] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 17 17:51:31.959524 containerd[1495]: 2025-03-17 17:51:31.710 [INFO][4827] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.69e01cee8ce0c7aa13b8d9e80cefa48e638835c0ebb7a4f47fc504b91698f177" host="localhost" Mar 17 17:51:31.959524 containerd[1495]: 2025-03-17 17:51:31.860 [INFO][4827] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 17 17:51:31.959524 containerd[1495]: 2025-03-17 17:51:31.872 [INFO][4827] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 17 17:51:31.959524 containerd[1495]: 2025-03-17 17:51:31.877 [INFO][4827] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 17 17:51:31.959524 containerd[1495]: 2025-03-17 17:51:31.890 [INFO][4827] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 17 17:51:31.959524 containerd[1495]: 2025-03-17 17:51:31.890 [INFO][4827] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.69e01cee8ce0c7aa13b8d9e80cefa48e638835c0ebb7a4f47fc504b91698f177" host="localhost" Mar 17 17:51:31.959524 containerd[1495]: 2025-03-17 17:51:31.893 [INFO][4827] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.69e01cee8ce0c7aa13b8d9e80cefa48e638835c0ebb7a4f47fc504b91698f177 Mar 17 17:51:31.959524 containerd[1495]: 2025-03-17 17:51:31.897 [INFO][4827] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.69e01cee8ce0c7aa13b8d9e80cefa48e638835c0ebb7a4f47fc504b91698f177" host="localhost" Mar 17 17:51:31.959524 containerd[1495]: 2025-03-17 17:51:31.909 [INFO][4827] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.69e01cee8ce0c7aa13b8d9e80cefa48e638835c0ebb7a4f47fc504b91698f177" host="localhost" Mar 17 17:51:31.959524 containerd[1495]: 2025-03-17 17:51:31.910 [INFO][4827] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.69e01cee8ce0c7aa13b8d9e80cefa48e638835c0ebb7a4f47fc504b91698f177" host="localhost" Mar 17 17:51:31.959524 containerd[1495]: 2025-03-17 17:51:31.910 [INFO][4827] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 17:51:31.959524 containerd[1495]: 2025-03-17 17:51:31.910 [INFO][4827] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="69e01cee8ce0c7aa13b8d9e80cefa48e638835c0ebb7a4f47fc504b91698f177" HandleID="k8s-pod-network.69e01cee8ce0c7aa13b8d9e80cefa48e638835c0ebb7a4f47fc504b91698f177" Workload="localhost-k8s-coredns--668d6bf9bc--t9ppl-eth0" Mar 17 17:51:31.960127 containerd[1495]: 2025-03-17 17:51:31.926 [INFO][4760] cni-plugin/k8s.go 386: Populated endpoint ContainerID="69e01cee8ce0c7aa13b8d9e80cefa48e638835c0ebb7a4f47fc504b91698f177" Namespace="kube-system" Pod="coredns-668d6bf9bc-t9ppl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--t9ppl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--t9ppl-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b10ce8b2-d481-4335-85f1-af093a79a238", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 50, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-t9ppl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali86c90edc25c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:51:31.960127 containerd[1495]: 2025-03-17 17:51:31.927 [INFO][4760] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="69e01cee8ce0c7aa13b8d9e80cefa48e638835c0ebb7a4f47fc504b91698f177" Namespace="kube-system" Pod="coredns-668d6bf9bc-t9ppl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--t9ppl-eth0" Mar 17 17:51:31.960127 containerd[1495]: 2025-03-17 17:51:31.928 [INFO][4760] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali86c90edc25c ContainerID="69e01cee8ce0c7aa13b8d9e80cefa48e638835c0ebb7a4f47fc504b91698f177" Namespace="kube-system" Pod="coredns-668d6bf9bc-t9ppl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--t9ppl-eth0" Mar 17 17:51:31.960127 containerd[1495]: 2025-03-17 17:51:31.943 [INFO][4760] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="69e01cee8ce0c7aa13b8d9e80cefa48e638835c0ebb7a4f47fc504b91698f177" Namespace="kube-system" Pod="coredns-668d6bf9bc-t9ppl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--t9ppl-eth0" Mar 17 
17:51:31.960127 containerd[1495]: 2025-03-17 17:51:31.944 [INFO][4760] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="69e01cee8ce0c7aa13b8d9e80cefa48e638835c0ebb7a4f47fc504b91698f177" Namespace="kube-system" Pod="coredns-668d6bf9bc-t9ppl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--t9ppl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--t9ppl-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b10ce8b2-d481-4335-85f1-af093a79a238", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 50, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"69e01cee8ce0c7aa13b8d9e80cefa48e638835c0ebb7a4f47fc504b91698f177", Pod:"coredns-668d6bf9bc-t9ppl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali86c90edc25c", MAC:"2a:c1:b8:e7:1f:b6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:51:31.960127 containerd[1495]: 2025-03-17 17:51:31.953 [INFO][4760] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="69e01cee8ce0c7aa13b8d9e80cefa48e638835c0ebb7a4f47fc504b91698f177" Namespace="kube-system" Pod="coredns-668d6bf9bc-t9ppl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--t9ppl-eth0" Mar 17 17:51:31.961888 containerd[1495]: time="2025-03-17T17:51:31.961865874Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"330a159792e66ced1c43f48d9ad3ac26e6a568a1bd1f57d4357afbbff34cefb2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 17 17:51:31.961996 containerd[1495]: time="2025-03-17T17:51:31.961982433Z" level=info msg="RemovePodSandbox \"330a159792e66ced1c43f48d9ad3ac26e6a568a1bd1f57d4357afbbff34cefb2\" returns successfully" Mar 17 17:51:31.962451 containerd[1495]: time="2025-03-17T17:51:31.962414094Z" level=info msg="StopPodSandbox for \"3e3ed683c84cdf993bdf7652a727c2e03891d4b841b79faae2a62a59c10972b8\"" Mar 17 17:51:31.962551 containerd[1495]: time="2025-03-17T17:51:31.962532035Z" level=info msg="TearDown network for sandbox \"3e3ed683c84cdf993bdf7652a727c2e03891d4b841b79faae2a62a59c10972b8\" successfully" Mar 17 17:51:31.962588 containerd[1495]: time="2025-03-17T17:51:31.962560398Z" level=info msg="StopPodSandbox for \"3e3ed683c84cdf993bdf7652a727c2e03891d4b841b79faae2a62a59c10972b8\" returns successfully" Mar 17 17:51:31.962923 containerd[1495]: time="2025-03-17T17:51:31.962886992Z" level=info msg="RemovePodSandbox for \"3e3ed683c84cdf993bdf7652a727c2e03891d4b841b79faae2a62a59c10972b8\"" Mar 17 17:51:31.962923 containerd[1495]: time="2025-03-17T17:51:31.962910666Z" level=info msg="Forcibly stopping sandbox 
\"3e3ed683c84cdf993bdf7652a727c2e03891d4b841b79faae2a62a59c10972b8\"" Mar 17 17:51:31.963085 containerd[1495]: time="2025-03-17T17:51:31.962973955Z" level=info msg="TearDown network for sandbox \"3e3ed683c84cdf993bdf7652a727c2e03891d4b841b79faae2a62a59c10972b8\" successfully" Mar 17 17:51:31.967117 containerd[1495]: time="2025-03-17T17:51:31.966627907Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3e3ed683c84cdf993bdf7652a727c2e03891d4b841b79faae2a62a59c10972b8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 17 17:51:31.967117 containerd[1495]: time="2025-03-17T17:51:31.966677139Z" level=info msg="RemovePodSandbox \"3e3ed683c84cdf993bdf7652a727c2e03891d4b841b79faae2a62a59c10972b8\" returns successfully" Mar 17 17:51:31.967117 containerd[1495]: time="2025-03-17T17:51:31.966970620Z" level=info msg="StopPodSandbox for \"3a287b471345919edb3a7639a2d16c54930291b1e5e9c520c2543dc01a0641d4\"" Mar 17 17:51:31.967117 containerd[1495]: time="2025-03-17T17:51:31.967070428Z" level=info msg="TearDown network for sandbox \"3a287b471345919edb3a7639a2d16c54930291b1e5e9c520c2543dc01a0641d4\" successfully" Mar 17 17:51:31.967117 containerd[1495]: time="2025-03-17T17:51:31.967080396Z" level=info msg="StopPodSandbox for \"3a287b471345919edb3a7639a2d16c54930291b1e5e9c520c2543dc01a0641d4\" returns successfully" Mar 17 17:51:31.967651 containerd[1495]: time="2025-03-17T17:51:31.967632644Z" level=info msg="RemovePodSandbox for \"3a287b471345919edb3a7639a2d16c54930291b1e5e9c520c2543dc01a0641d4\"" Mar 17 17:51:31.967716 containerd[1495]: time="2025-03-17T17:51:31.967703907Z" level=info msg="Forcibly stopping sandbox \"3a287b471345919edb3a7639a2d16c54930291b1e5e9c520c2543dc01a0641d4\"" Mar 17 17:51:31.968113 containerd[1495]: time="2025-03-17T17:51:31.967815356Z" level=info msg="TearDown network for sandbox \"3a287b471345919edb3a7639a2d16c54930291b1e5e9c520c2543dc01a0641d4\" successfully" Mar 
17 17:51:31.986183 containerd[1495]: time="2025-03-17T17:51:31.986142533Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3a287b471345919edb3a7639a2d16c54930291b1e5e9c520c2543dc01a0641d4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 17 17:51:31.986376 containerd[1495]: time="2025-03-17T17:51:31.986360232Z" level=info msg="RemovePodSandbox \"3a287b471345919edb3a7639a2d16c54930291b1e5e9c520c2543dc01a0641d4\" returns successfully" Mar 17 17:51:31.987398 containerd[1495]: time="2025-03-17T17:51:31.986952043Z" level=info msg="StopPodSandbox for \"b6b3d7aea30fda8bae3bbefd9f279c8db4c1f2e037ac2da59423d40949d4333e\"" Mar 17 17:51:31.987398 containerd[1495]: time="2025-03-17T17:51:31.987078571Z" level=info msg="TearDown network for sandbox \"b6b3d7aea30fda8bae3bbefd9f279c8db4c1f2e037ac2da59423d40949d4333e\" successfully" Mar 17 17:51:31.987398 containerd[1495]: time="2025-03-17T17:51:31.987096304Z" level=info msg="StopPodSandbox for \"b6b3d7aea30fda8bae3bbefd9f279c8db4c1f2e037ac2da59423d40949d4333e\" returns successfully" Mar 17 17:51:31.988202 containerd[1495]: time="2025-03-17T17:51:31.987862362Z" level=info msg="RemovePodSandbox for \"b6b3d7aea30fda8bae3bbefd9f279c8db4c1f2e037ac2da59423d40949d4333e\"" Mar 17 17:51:31.988202 containerd[1495]: time="2025-03-17T17:51:31.987882791Z" level=info msg="Forcibly stopping sandbox \"b6b3d7aea30fda8bae3bbefd9f279c8db4c1f2e037ac2da59423d40949d4333e\"" Mar 17 17:51:31.988202 containerd[1495]: time="2025-03-17T17:51:31.987986987Z" level=info msg="TearDown network for sandbox \"b6b3d7aea30fda8bae3bbefd9f279c8db4c1f2e037ac2da59423d40949d4333e\" successfully" Mar 17 17:51:32.002286 containerd[1495]: time="2025-03-17T17:51:32.002229895Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b6b3d7aea30fda8bae3bbefd9f279c8db4c1f2e037ac2da59423d40949d4333e\": an error occurred when try to find sandbox: not 
found. Sending the event with nil podSandboxStatus." Mar 17 17:51:32.002642 containerd[1495]: time="2025-03-17T17:51:32.002496246Z" level=info msg="RemovePodSandbox \"b6b3d7aea30fda8bae3bbefd9f279c8db4c1f2e037ac2da59423d40949d4333e\" returns successfully" Mar 17 17:51:32.003197 containerd[1495]: time="2025-03-17T17:51:32.003165754Z" level=info msg="StopPodSandbox for \"8283abb1d7f07f2e1772939ea0e998f374a695aaefe53c1b9284af89adbe85b1\"" Mar 17 17:51:32.003443 containerd[1495]: time="2025-03-17T17:51:32.003426825Z" level=info msg="TearDown network for sandbox \"8283abb1d7f07f2e1772939ea0e998f374a695aaefe53c1b9284af89adbe85b1\" successfully" Mar 17 17:51:32.003558 containerd[1495]: time="2025-03-17T17:51:32.003543434Z" level=info msg="StopPodSandbox for \"8283abb1d7f07f2e1772939ea0e998f374a695aaefe53c1b9284af89adbe85b1\" returns successfully" Mar 17 17:51:32.005389 containerd[1495]: time="2025-03-17T17:51:32.004658821Z" level=info msg="RemovePodSandbox for \"8283abb1d7f07f2e1772939ea0e998f374a695aaefe53c1b9284af89adbe85b1\"" Mar 17 17:51:32.005516 containerd[1495]: time="2025-03-17T17:51:32.005478832Z" level=info msg="Forcibly stopping sandbox \"8283abb1d7f07f2e1772939ea0e998f374a695aaefe53c1b9284af89adbe85b1\"" Mar 17 17:51:32.005895 containerd[1495]: time="2025-03-17T17:51:32.005841824Z" level=info msg="TearDown network for sandbox \"8283abb1d7f07f2e1772939ea0e998f374a695aaefe53c1b9284af89adbe85b1\" successfully" Mar 17 17:51:32.019727 containerd[1495]: time="2025-03-17T17:51:32.019287214Z" level=info msg="StartContainer for \"2b3ba10785bb4fc8591875bfced5a56b5d75b7c20bbc4a171b73e9a15ef89375\" returns successfully" Mar 17 17:51:32.019727 containerd[1495]: time="2025-03-17T17:51:32.019315928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-db9856-swh96,Uid:802f1eaf-7d52-4b00-9fa9-f37418e92a64,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"f065516ea452ed4b804ad17b255116335dbf8f890933b1970356ea69bfd6f3ce\"" Mar 17 
17:51:32.020897 containerd[1495]: time="2025-03-17T17:51:32.020515002Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8283abb1d7f07f2e1772939ea0e998f374a695aaefe53c1b9284af89adbe85b1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 17 17:51:32.020897 containerd[1495]: time="2025-03-17T17:51:32.020587898Z" level=info msg="RemovePodSandbox \"8283abb1d7f07f2e1772939ea0e998f374a695aaefe53c1b9284af89adbe85b1\" returns successfully" Mar 17 17:51:32.021216 containerd[1495]: time="2025-03-17T17:51:32.021185732Z" level=info msg="StopPodSandbox for \"bad719ef2bd8472495fd310f3c4bd1ad2f5f9caa1e2a7ea36599d5afee4c3d25\"" Mar 17 17:51:32.029760 containerd[1495]: time="2025-03-17T17:51:32.029552407Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:51:32.029760 containerd[1495]: time="2025-03-17T17:51:32.029616037Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:51:32.029760 containerd[1495]: time="2025-03-17T17:51:32.029629352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:51:32.029760 containerd[1495]: time="2025-03-17T17:51:32.029701317Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:51:32.031761 containerd[1495]: time="2025-03-17T17:51:32.031689434Z" level=info msg="TearDown network for sandbox \"bad719ef2bd8472495fd310f3c4bd1ad2f5f9caa1e2a7ea36599d5afee4c3d25\" successfully" Mar 17 17:51:32.031888 containerd[1495]: time="2025-03-17T17:51:32.031875133Z" level=info msg="StopPodSandbox for \"bad719ef2bd8472495fd310f3c4bd1ad2f5f9caa1e2a7ea36599d5afee4c3d25\" returns successfully" Mar 17 17:51:32.032436 containerd[1495]: time="2025-03-17T17:51:32.032381855Z" level=info msg="RemovePodSandbox for \"bad719ef2bd8472495fd310f3c4bd1ad2f5f9caa1e2a7ea36599d5afee4c3d25\"" Mar 17 17:51:32.032642 containerd[1495]: time="2025-03-17T17:51:32.032610425Z" level=info msg="Forcibly stopping sandbox \"bad719ef2bd8472495fd310f3c4bd1ad2f5f9caa1e2a7ea36599d5afee4c3d25\"" Mar 17 17:51:32.032730 containerd[1495]: time="2025-03-17T17:51:32.032710653Z" level=info msg="TearDown network for sandbox \"bad719ef2bd8472495fd310f3c4bd1ad2f5f9caa1e2a7ea36599d5afee4c3d25\" successfully" Mar 17 17:51:32.040934 containerd[1495]: time="2025-03-17T17:51:32.040880077Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bad719ef2bd8472495fd310f3c4bd1ad2f5f9caa1e2a7ea36599d5afee4c3d25\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:51:32.041916 containerd[1495]: time="2025-03-17T17:51:32.041799896Z" level=info msg="RemovePodSandbox \"bad719ef2bd8472495fd310f3c4bd1ad2f5f9caa1e2a7ea36599d5afee4c3d25\" returns successfully" Mar 17 17:51:32.042748 containerd[1495]: time="2025-03-17T17:51:32.042407237Z" level=info msg="StopPodSandbox for \"c3bc28f45d2ec64413f682e0fa3ae1f2815a1ace0cf71f660a835588639d5f9d\"" Mar 17 17:51:32.042748 containerd[1495]: time="2025-03-17T17:51:32.042523006Z" level=info msg="TearDown network for sandbox \"c3bc28f45d2ec64413f682e0fa3ae1f2815a1ace0cf71f660a835588639d5f9d\" successfully" Mar 17 17:51:32.042748 containerd[1495]: time="2025-03-17T17:51:32.042533145Z" level=info msg="StopPodSandbox for \"c3bc28f45d2ec64413f682e0fa3ae1f2815a1ace0cf71f660a835588639d5f9d\" returns successfully" Mar 17 17:51:32.042513 systemd-networkd[1423]: cali6ff094f7b1d: Link UP Mar 17 17:51:32.043532 containerd[1495]: time="2025-03-17T17:51:32.043363414Z" level=info msg="RemovePodSandbox for \"c3bc28f45d2ec64413f682e0fa3ae1f2815a1ace0cf71f660a835588639d5f9d\"" Mar 17 17:51:32.043532 containerd[1495]: time="2025-03-17T17:51:32.043385737Z" level=info msg="Forcibly stopping sandbox \"c3bc28f45d2ec64413f682e0fa3ae1f2815a1ace0cf71f660a835588639d5f9d\"" Mar 17 17:51:32.043532 containerd[1495]: time="2025-03-17T17:51:32.043463794Z" level=info msg="TearDown network for sandbox \"c3bc28f45d2ec64413f682e0fa3ae1f2815a1ace0cf71f660a835588639d5f9d\" successfully" Mar 17 17:51:32.043808 systemd-networkd[1423]: cali6ff094f7b1d: Gained carrier Mar 17 17:51:32.048492 containerd[1495]: time="2025-03-17T17:51:32.048453020Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c3bc28f45d2ec64413f682e0fa3ae1f2815a1ace0cf71f660a835588639d5f9d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:51:32.048639 containerd[1495]: time="2025-03-17T17:51:32.048608543Z" level=info msg="RemovePodSandbox \"c3bc28f45d2ec64413f682e0fa3ae1f2815a1ace0cf71f660a835588639d5f9d\" returns successfully" Mar 17 17:51:32.049138 containerd[1495]: time="2025-03-17T17:51:32.049005159Z" level=info msg="StopPodSandbox for \"a9fcaae0af3522cc5fc57698f5f6bd14e963e141a90fc5f11a5c576ce8759dc0\"" Mar 17 17:51:32.049138 containerd[1495]: time="2025-03-17T17:51:32.049101970Z" level=info msg="TearDown network for sandbox \"a9fcaae0af3522cc5fc57698f5f6bd14e963e141a90fc5f11a5c576ce8759dc0\" successfully" Mar 17 17:51:32.049138 containerd[1495]: time="2025-03-17T17:51:32.049111348Z" level=info msg="StopPodSandbox for \"a9fcaae0af3522cc5fc57698f5f6bd14e963e141a90fc5f11a5c576ce8759dc0\" returns successfully" Mar 17 17:51:32.049473 containerd[1495]: time="2025-03-17T17:51:32.049452499Z" level=info msg="RemovePodSandbox for \"a9fcaae0af3522cc5fc57698f5f6bd14e963e141a90fc5f11a5c576ce8759dc0\"" Mar 17 17:51:32.049545 containerd[1495]: time="2025-03-17T17:51:32.049531898Z" level=info msg="Forcibly stopping sandbox \"a9fcaae0af3522cc5fc57698f5f6bd14e963e141a90fc5f11a5c576ce8759dc0\"" Mar 17 17:51:32.049679 containerd[1495]: time="2025-03-17T17:51:32.049642196Z" level=info msg="TearDown network for sandbox \"a9fcaae0af3522cc5fc57698f5f6bd14e963e141a90fc5f11a5c576ce8759dc0\" successfully" Mar 17 17:51:32.057102 systemd[1]: Started cri-containerd-69e01cee8ce0c7aa13b8d9e80cefa48e638835c0ebb7a4f47fc504b91698f177.scope - libcontainer container 69e01cee8ce0c7aa13b8d9e80cefa48e638835c0ebb7a4f47fc504b91698f177. Mar 17 17:51:32.059405 containerd[1495]: time="2025-03-17T17:51:32.059311288Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a9fcaae0af3522cc5fc57698f5f6bd14e963e141a90fc5f11a5c576ce8759dc0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:51:32.059615 containerd[1495]: time="2025-03-17T17:51:32.059484705Z" level=info msg="RemovePodSandbox \"a9fcaae0af3522cc5fc57698f5f6bd14e963e141a90fc5f11a5c576ce8759dc0\" returns successfully" Mar 17 17:51:32.060653 containerd[1495]: time="2025-03-17T17:51:32.060630508Z" level=info msg="StopPodSandbox for \"f5565325feec65207a1acf2ef59fbe892b90a323d863ed912cb0dd3d09c20889\"" Mar 17 17:51:32.060872 containerd[1495]: time="2025-03-17T17:51:32.060830494Z" level=info msg="TearDown network for sandbox \"f5565325feec65207a1acf2ef59fbe892b90a323d863ed912cb0dd3d09c20889\" successfully" Mar 17 17:51:32.060872 containerd[1495]: time="2025-03-17T17:51:32.060848377Z" level=info msg="StopPodSandbox for \"f5565325feec65207a1acf2ef59fbe892b90a323d863ed912cb0dd3d09c20889\" returns successfully" Mar 17 17:51:32.063082 containerd[1495]: time="2025-03-17T17:51:32.061311047Z" level=info msg="RemovePodSandbox for \"f5565325feec65207a1acf2ef59fbe892b90a323d863ed912cb0dd3d09c20889\"" Mar 17 17:51:32.063082 containerd[1495]: time="2025-03-17T17:51:32.061335523Z" level=info msg="Forcibly stopping sandbox \"f5565325feec65207a1acf2ef59fbe892b90a323d863ed912cb0dd3d09c20889\"" Mar 17 17:51:32.063082 containerd[1495]: time="2025-03-17T17:51:32.061430852Z" level=info msg="TearDown network for sandbox \"f5565325feec65207a1acf2ef59fbe892b90a323d863ed912cb0dd3d09c20889\" successfully" Mar 17 17:51:32.067380 containerd[1495]: 2025-03-17 17:51:30.599 [INFO][4799] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 17:51:32.067380 containerd[1495]: 2025-03-17 17:51:30.621 [INFO][4799] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--9zh68-eth0 csi-node-driver- calico-system 8eeb7871-e618-4798-a87d-f7b3c9c67c97 657 0 2025-03-17 17:50:48 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:54877d75d5 k8s-app:csi-node-driver name:csi-node-driver 
pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-9zh68 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali6ff094f7b1d [] []}} ContainerID="e72caab7569f4216f419c4c109d6552e20a12d8b549da571f32dec78490632b6" Namespace="calico-system" Pod="csi-node-driver-9zh68" WorkloadEndpoint="localhost-k8s-csi--node--driver--9zh68-" Mar 17 17:51:32.067380 containerd[1495]: 2025-03-17 17:51:30.621 [INFO][4799] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e72caab7569f4216f419c4c109d6552e20a12d8b549da571f32dec78490632b6" Namespace="calico-system" Pod="csi-node-driver-9zh68" WorkloadEndpoint="localhost-k8s-csi--node--driver--9zh68-eth0" Mar 17 17:51:32.067380 containerd[1495]: 2025-03-17 17:51:30.836 [INFO][4835] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e72caab7569f4216f419c4c109d6552e20a12d8b549da571f32dec78490632b6" HandleID="k8s-pod-network.e72caab7569f4216f419c4c109d6552e20a12d8b549da571f32dec78490632b6" Workload="localhost-k8s-csi--node--driver--9zh68-eth0" Mar 17 17:51:32.067380 containerd[1495]: 2025-03-17 17:51:30.885 [INFO][4835] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e72caab7569f4216f419c4c109d6552e20a12d8b549da571f32dec78490632b6" HandleID="k8s-pod-network.e72caab7569f4216f419c4c109d6552e20a12d8b549da571f32dec78490632b6" Workload="localhost-k8s-csi--node--driver--9zh68-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003cfb20), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-9zh68", "timestamp":"2025-03-17 17:51:30.836133946 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Mar 17 17:51:32.067380 containerd[1495]: 2025-03-17 17:51:30.885 [INFO][4835] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:51:32.067380 containerd[1495]: 2025-03-17 17:51:31.915 [INFO][4835] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 17:51:32.067380 containerd[1495]: 2025-03-17 17:51:31.915 [INFO][4835] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 17 17:51:32.067380 containerd[1495]: 2025-03-17 17:51:31.921 [INFO][4835] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e72caab7569f4216f419c4c109d6552e20a12d8b549da571f32dec78490632b6" host="localhost" Mar 17 17:51:32.067380 containerd[1495]: 2025-03-17 17:51:31.998 [INFO][4835] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 17 17:51:32.067380 containerd[1495]: 2025-03-17 17:51:32.004 [INFO][4835] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 17 17:51:32.067380 containerd[1495]: 2025-03-17 17:51:32.006 [INFO][4835] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 17 17:51:32.067380 containerd[1495]: 2025-03-17 17:51:32.009 [INFO][4835] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 17 17:51:32.067380 containerd[1495]: 2025-03-17 17:51:32.009 [INFO][4835] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e72caab7569f4216f419c4c109d6552e20a12d8b549da571f32dec78490632b6" host="localhost" Mar 17 17:51:32.067380 containerd[1495]: 2025-03-17 17:51:32.013 [INFO][4835] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e72caab7569f4216f419c4c109d6552e20a12d8b549da571f32dec78490632b6 Mar 17 17:51:32.067380 containerd[1495]: 2025-03-17 17:51:32.018 [INFO][4835] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 
handle="k8s-pod-network.e72caab7569f4216f419c4c109d6552e20a12d8b549da571f32dec78490632b6" host="localhost" Mar 17 17:51:32.067380 containerd[1495]: 2025-03-17 17:51:32.026 [INFO][4835] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.e72caab7569f4216f419c4c109d6552e20a12d8b549da571f32dec78490632b6" host="localhost" Mar 17 17:51:32.067380 containerd[1495]: 2025-03-17 17:51:32.027 [INFO][4835] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.e72caab7569f4216f419c4c109d6552e20a12d8b549da571f32dec78490632b6" host="localhost" Mar 17 17:51:32.067380 containerd[1495]: 2025-03-17 17:51:32.027 [INFO][4835] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 17:51:32.067380 containerd[1495]: 2025-03-17 17:51:32.027 [INFO][4835] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="e72caab7569f4216f419c4c109d6552e20a12d8b549da571f32dec78490632b6" HandleID="k8s-pod-network.e72caab7569f4216f419c4c109d6552e20a12d8b549da571f32dec78490632b6" Workload="localhost-k8s-csi--node--driver--9zh68-eth0" Mar 17 17:51:32.067930 containerd[1495]: 2025-03-17 17:51:32.034 [INFO][4799] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e72caab7569f4216f419c4c109d6552e20a12d8b549da571f32dec78490632b6" Namespace="calico-system" Pod="csi-node-driver-9zh68" WorkloadEndpoint="localhost-k8s-csi--node--driver--9zh68-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--9zh68-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8eeb7871-e618-4798-a87d-f7b3c9c67c97", ResourceVersion:"657", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 50, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"54877d75d5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-9zh68", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6ff094f7b1d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:51:32.067930 containerd[1495]: 2025-03-17 17:51:32.034 [INFO][4799] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="e72caab7569f4216f419c4c109d6552e20a12d8b549da571f32dec78490632b6" Namespace="calico-system" Pod="csi-node-driver-9zh68" WorkloadEndpoint="localhost-k8s-csi--node--driver--9zh68-eth0" Mar 17 17:51:32.067930 containerd[1495]: 2025-03-17 17:51:32.034 [INFO][4799] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6ff094f7b1d ContainerID="e72caab7569f4216f419c4c109d6552e20a12d8b549da571f32dec78490632b6" Namespace="calico-system" Pod="csi-node-driver-9zh68" WorkloadEndpoint="localhost-k8s-csi--node--driver--9zh68-eth0" Mar 17 17:51:32.067930 containerd[1495]: 2025-03-17 17:51:32.046 [INFO][4799] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e72caab7569f4216f419c4c109d6552e20a12d8b549da571f32dec78490632b6" Namespace="calico-system" Pod="csi-node-driver-9zh68" WorkloadEndpoint="localhost-k8s-csi--node--driver--9zh68-eth0" 
Mar 17 17:51:32.067930 containerd[1495]: 2025-03-17 17:51:32.047 [INFO][4799] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e72caab7569f4216f419c4c109d6552e20a12d8b549da571f32dec78490632b6" Namespace="calico-system" Pod="csi-node-driver-9zh68" WorkloadEndpoint="localhost-k8s-csi--node--driver--9zh68-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--9zh68-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8eeb7871-e618-4798-a87d-f7b3c9c67c97", ResourceVersion:"657", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 50, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"54877d75d5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e72caab7569f4216f419c4c109d6552e20a12d8b549da571f32dec78490632b6", Pod:"csi-node-driver-9zh68", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6ff094f7b1d", MAC:"f6:3a:8c:28:8d:62", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:51:32.067930 containerd[1495]: 2025-03-17 17:51:32.065 [INFO][4799] cni-plugin/k8s.go 500: Wrote 
updated endpoint to datastore ContainerID="e72caab7569f4216f419c4c109d6552e20a12d8b549da571f32dec78490632b6" Namespace="calico-system" Pod="csi-node-driver-9zh68" WorkloadEndpoint="localhost-k8s-csi--node--driver--9zh68-eth0" Mar 17 17:51:32.069671 containerd[1495]: time="2025-03-17T17:51:32.069645311Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f5565325feec65207a1acf2ef59fbe892b90a323d863ed912cb0dd3d09c20889\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 17 17:51:32.069778 containerd[1495]: time="2025-03-17T17:51:32.069763123Z" level=info msg="RemovePodSandbox \"f5565325feec65207a1acf2ef59fbe892b90a323d863ed912cb0dd3d09c20889\" returns successfully" Mar 17 17:51:32.070314 containerd[1495]: time="2025-03-17T17:51:32.070275165Z" level=info msg="StopPodSandbox for \"611fa18f639a22f7c2dd78848bdc5dee201a8ab88e85e873a7047048763f300a\"" Mar 17 17:51:32.070429 containerd[1495]: time="2025-03-17T17:51:32.070408375Z" level=info msg="TearDown network for sandbox \"611fa18f639a22f7c2dd78848bdc5dee201a8ab88e85e873a7047048763f300a\" successfully" Mar 17 17:51:32.070429 containerd[1495]: time="2025-03-17T17:51:32.070424185Z" level=info msg="StopPodSandbox for \"611fa18f639a22f7c2dd78848bdc5dee201a8ab88e85e873a7047048763f300a\" returns successfully" Mar 17 17:51:32.071554 containerd[1495]: time="2025-03-17T17:51:32.071534652Z" level=info msg="RemovePodSandbox for \"611fa18f639a22f7c2dd78848bdc5dee201a8ab88e85e873a7047048763f300a\"" Mar 17 17:51:32.071636 containerd[1495]: time="2025-03-17T17:51:32.071622908Z" level=info msg="Forcibly stopping sandbox \"611fa18f639a22f7c2dd78848bdc5dee201a8ab88e85e873a7047048763f300a\"" Mar 17 17:51:32.071902 containerd[1495]: time="2025-03-17T17:51:32.071859302Z" level=info msg="TearDown network for sandbox \"611fa18f639a22f7c2dd78848bdc5dee201a8ab88e85e873a7047048763f300a\" successfully" Mar 17 17:51:32.077776 containerd[1495]: 
time="2025-03-17T17:51:32.077751938Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"611fa18f639a22f7c2dd78848bdc5dee201a8ab88e85e873a7047048763f300a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 17 17:51:32.078078 containerd[1495]: time="2025-03-17T17:51:32.078061990Z" level=info msg="RemovePodSandbox \"611fa18f639a22f7c2dd78848bdc5dee201a8ab88e85e873a7047048763f300a\" returns successfully" Mar 17 17:51:32.078551 containerd[1495]: time="2025-03-17T17:51:32.078529008Z" level=info msg="StopPodSandbox for \"f3daeebfad93cd496e897f561ff8e055ad960f28c8676d594ffa89f640a4a005\"" Mar 17 17:51:32.078699 containerd[1495]: time="2025-03-17T17:51:32.078683107Z" level=info msg="TearDown network for sandbox \"f3daeebfad93cd496e897f561ff8e055ad960f28c8676d594ffa89f640a4a005\" successfully" Mar 17 17:51:32.078771 containerd[1495]: time="2025-03-17T17:51:32.078753900Z" level=info msg="StopPodSandbox for \"f3daeebfad93cd496e897f561ff8e055ad960f28c8676d594ffa89f640a4a005\" returns successfully" Mar 17 17:51:32.079178 containerd[1495]: time="2025-03-17T17:51:32.079151387Z" level=info msg="RemovePodSandbox for \"f3daeebfad93cd496e897f561ff8e055ad960f28c8676d594ffa89f640a4a005\"" Mar 17 17:51:32.079258 containerd[1495]: time="2025-03-17T17:51:32.079245714Z" level=info msg="Forcibly stopping sandbox \"f3daeebfad93cd496e897f561ff8e055ad960f28c8676d594ffa89f640a4a005\"" Mar 17 17:51:32.079384 containerd[1495]: time="2025-03-17T17:51:32.079350291Z" level=info msg="TearDown network for sandbox \"f3daeebfad93cd496e897f561ff8e055ad960f28c8676d594ffa89f640a4a005\" successfully" Mar 17 17:51:32.084748 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 17:51:32.088227 containerd[1495]: time="2025-03-17T17:51:32.088194864Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID 
\"f3daeebfad93cd496e897f561ff8e055ad960f28c8676d594ffa89f640a4a005\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 17 17:51:32.088330 containerd[1495]: time="2025-03-17T17:51:32.088249487Z" level=info msg="RemovePodSandbox \"f3daeebfad93cd496e897f561ff8e055ad960f28c8676d594ffa89f640a4a005\" returns successfully" Mar 17 17:51:32.088898 containerd[1495]: time="2025-03-17T17:51:32.088777620Z" level=info msg="StopPodSandbox for \"7bca8f67a29d3302112647e05c33ad952b2e35e3e88e474c06f764720b805033\"" Mar 17 17:51:32.089066 containerd[1495]: time="2025-03-17T17:51:32.089051775Z" level=info msg="TearDown network for sandbox \"7bca8f67a29d3302112647e05c33ad952b2e35e3e88e474c06f764720b805033\" successfully" Mar 17 17:51:32.089149 containerd[1495]: time="2025-03-17T17:51:32.089136605Z" level=info msg="StopPodSandbox for \"7bca8f67a29d3302112647e05c33ad952b2e35e3e88e474c06f764720b805033\" returns successfully" Mar 17 17:51:32.089633 containerd[1495]: time="2025-03-17T17:51:32.089608511Z" level=info msg="RemovePodSandbox for \"7bca8f67a29d3302112647e05c33ad952b2e35e3e88e474c06f764720b805033\"" Mar 17 17:51:32.089775 containerd[1495]: time="2025-03-17T17:51:32.089761338Z" level=info msg="Forcibly stopping sandbox \"7bca8f67a29d3302112647e05c33ad952b2e35e3e88e474c06f764720b805033\"" Mar 17 17:51:32.089951 containerd[1495]: time="2025-03-17T17:51:32.089911140Z" level=info msg="TearDown network for sandbox \"7bca8f67a29d3302112647e05c33ad952b2e35e3e88e474c06f764720b805033\" successfully" Mar 17 17:51:32.097170 containerd[1495]: time="2025-03-17T17:51:32.097113186Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7bca8f67a29d3302112647e05c33ad952b2e35e3e88e474c06f764720b805033\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:51:32.097346 containerd[1495]: time="2025-03-17T17:51:32.097326527Z" level=info msg="RemovePodSandbox \"7bca8f67a29d3302112647e05c33ad952b2e35e3e88e474c06f764720b805033\" returns successfully" Mar 17 17:51:32.097781 containerd[1495]: time="2025-03-17T17:51:32.097756685Z" level=info msg="StopPodSandbox for \"20d5dae1d6c12324bb73944ca482776c3f59c00f5d43e5aa5f51a227cd0c58a1\"" Mar 17 17:51:32.097941 containerd[1495]: time="2025-03-17T17:51:32.097927046Z" level=info msg="TearDown network for sandbox \"20d5dae1d6c12324bb73944ca482776c3f59c00f5d43e5aa5f51a227cd0c58a1\" successfully" Mar 17 17:51:32.097995 containerd[1495]: time="2025-03-17T17:51:32.097984173Z" level=info msg="StopPodSandbox for \"20d5dae1d6c12324bb73944ca482776c3f59c00f5d43e5aa5f51a227cd0c58a1\" returns successfully" Mar 17 17:51:32.098325 containerd[1495]: time="2025-03-17T17:51:32.098308523Z" level=info msg="RemovePodSandbox for \"20d5dae1d6c12324bb73944ca482776c3f59c00f5d43e5aa5f51a227cd0c58a1\"" Mar 17 17:51:32.098412 containerd[1495]: time="2025-03-17T17:51:32.098388493Z" level=info msg="Forcibly stopping sandbox \"20d5dae1d6c12324bb73944ca482776c3f59c00f5d43e5aa5f51a227cd0c58a1\"" Mar 17 17:51:32.098557 containerd[1495]: time="2025-03-17T17:51:32.098519459Z" level=info msg="TearDown network for sandbox \"20d5dae1d6c12324bb73944ca482776c3f59c00f5d43e5aa5f51a227cd0c58a1\" successfully" Mar 17 17:51:32.103338 containerd[1495]: time="2025-03-17T17:51:32.103317447Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"20d5dae1d6c12324bb73944ca482776c3f59c00f5d43e5aa5f51a227cd0c58a1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:51:32.103448 containerd[1495]: time="2025-03-17T17:51:32.103433786Z" level=info msg="RemovePodSandbox \"20d5dae1d6c12324bb73944ca482776c3f59c00f5d43e5aa5f51a227cd0c58a1\" returns successfully" Mar 17 17:51:32.103725 containerd[1495]: time="2025-03-17T17:51:32.103708953Z" level=info msg="StopPodSandbox for \"4b1bcb5cdfc74a6a437aff803d44775132982a95f2b7c6175140e0c9f341858e\"" Mar 17 17:51:32.103848 containerd[1495]: time="2025-03-17T17:51:32.103833788Z" level=info msg="TearDown network for sandbox \"4b1bcb5cdfc74a6a437aff803d44775132982a95f2b7c6175140e0c9f341858e\" successfully" Mar 17 17:51:32.103931 containerd[1495]: time="2025-03-17T17:51:32.103910061Z" level=info msg="StopPodSandbox for \"4b1bcb5cdfc74a6a437aff803d44775132982a95f2b7c6175140e0c9f341858e\" returns successfully" Mar 17 17:51:32.104265 containerd[1495]: time="2025-03-17T17:51:32.104248777Z" level=info msg="RemovePodSandbox for \"4b1bcb5cdfc74a6a437aff803d44775132982a95f2b7c6175140e0c9f341858e\"" Mar 17 17:51:32.104341 containerd[1495]: time="2025-03-17T17:51:32.104328077Z" level=info msg="Forcibly stopping sandbox \"4b1bcb5cdfc74a6a437aff803d44775132982a95f2b7c6175140e0c9f341858e\"" Mar 17 17:51:32.104474 containerd[1495]: time="2025-03-17T17:51:32.104441489Z" level=info msg="TearDown network for sandbox \"4b1bcb5cdfc74a6a437aff803d44775132982a95f2b7c6175140e0c9f341858e\" successfully" Mar 17 17:51:32.112821 containerd[1495]: time="2025-03-17T17:51:32.112664995Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:51:32.112899 containerd[1495]: time="2025-03-17T17:51:32.112842409Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:51:32.113827 containerd[1495]: time="2025-03-17T17:51:32.113760645Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:51:32.114419 containerd[1495]: time="2025-03-17T17:51:32.114203938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:51:32.114701 containerd[1495]: time="2025-03-17T17:51:32.114678369Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4b1bcb5cdfc74a6a437aff803d44775132982a95f2b7c6175140e0c9f341858e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 17 17:51:32.114845 containerd[1495]: time="2025-03-17T17:51:32.114820998Z" level=info msg="RemovePodSandbox \"4b1bcb5cdfc74a6a437aff803d44775132982a95f2b7c6175140e0c9f341858e\" returns successfully" Mar 17 17:51:32.122777 containerd[1495]: time="2025-03-17T17:51:32.122757373Z" level=info msg="StopPodSandbox for \"78661d8268b9ec071762ea23e31e21309122e61c5cbce70ba5e3a5ccaf5e1c2a\"" Mar 17 17:51:32.122977 containerd[1495]: time="2025-03-17T17:51:32.122961998Z" level=info msg="TearDown network for sandbox \"78661d8268b9ec071762ea23e31e21309122e61c5cbce70ba5e3a5ccaf5e1c2a\" successfully" Mar 17 17:51:32.124064 containerd[1495]: time="2025-03-17T17:51:32.124048109Z" level=info msg="StopPodSandbox for \"78661d8268b9ec071762ea23e31e21309122e61c5cbce70ba5e3a5ccaf5e1c2a\" returns successfully" Mar 17 17:51:32.127319 containerd[1495]: time="2025-03-17T17:51:32.127299811Z" level=info msg="RemovePodSandbox for \"78661d8268b9ec071762ea23e31e21309122e61c5cbce70ba5e3a5ccaf5e1c2a\"" Mar 17 17:51:32.127961 containerd[1495]: time="2025-03-17T17:51:32.127942008Z" level=info msg="Forcibly stopping sandbox \"78661d8268b9ec071762ea23e31e21309122e61c5cbce70ba5e3a5ccaf5e1c2a\"" Mar 17 17:51:32.128612 containerd[1495]: time="2025-03-17T17:51:32.128524232Z" level=info msg="TearDown network for sandbox \"78661d8268b9ec071762ea23e31e21309122e61c5cbce70ba5e3a5ccaf5e1c2a\" successfully" Mar 17 
17:51:32.141256 systemd[1]: Started cri-containerd-e72caab7569f4216f419c4c109d6552e20a12d8b549da571f32dec78490632b6.scope - libcontainer container e72caab7569f4216f419c4c109d6552e20a12d8b549da571f32dec78490632b6. Mar 17 17:51:32.143199 containerd[1495]: time="2025-03-17T17:51:32.143159960Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"78661d8268b9ec071762ea23e31e21309122e61c5cbce70ba5e3a5ccaf5e1c2a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 17 17:51:32.143446 containerd[1495]: time="2025-03-17T17:51:32.143422774Z" level=info msg="RemovePodSandbox \"78661d8268b9ec071762ea23e31e21309122e61c5cbce70ba5e3a5ccaf5e1c2a\" returns successfully" Mar 17 17:51:32.145057 containerd[1495]: time="2025-03-17T17:51:32.144288511Z" level=info msg="StopPodSandbox for \"b90a88a1e5da255529675e67dad4d325788b366e74fc643f2c63b0f09c948b45\"" Mar 17 17:51:32.145057 containerd[1495]: time="2025-03-17T17:51:32.144423435Z" level=info msg="TearDown network for sandbox \"b90a88a1e5da255529675e67dad4d325788b366e74fc643f2c63b0f09c948b45\" successfully" Mar 17 17:51:32.145057 containerd[1495]: time="2025-03-17T17:51:32.144434806Z" level=info msg="StopPodSandbox for \"b90a88a1e5da255529675e67dad4d325788b366e74fc643f2c63b0f09c948b45\" returns successfully" Mar 17 17:51:32.145057 containerd[1495]: time="2025-03-17T17:51:32.145047637Z" level=info msg="RemovePodSandbox for \"b90a88a1e5da255529675e67dad4d325788b366e74fc643f2c63b0f09c948b45\"" Mar 17 17:51:32.145192 containerd[1495]: time="2025-03-17T17:51:32.145068898Z" level=info msg="Forcibly stopping sandbox \"b90a88a1e5da255529675e67dad4d325788b366e74fc643f2c63b0f09c948b45\"" Mar 17 17:51:32.145192 containerd[1495]: time="2025-03-17T17:51:32.145132357Z" level=info msg="TearDown network for sandbox \"b90a88a1e5da255529675e67dad4d325788b366e74fc643f2c63b0f09c948b45\" successfully" Mar 17 17:51:32.147027 systemd-networkd[1423]: cali394131fb631: Link 
UP Mar 17 17:51:32.147253 containerd[1495]: time="2025-03-17T17:51:32.147220361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t9ppl,Uid:b10ce8b2-d481-4335-85f1-af093a79a238,Namespace:kube-system,Attempt:5,} returns sandbox id \"69e01cee8ce0c7aa13b8d9e80cefa48e638835c0ebb7a4f47fc504b91698f177\"" Mar 17 17:51:32.148154 systemd-networkd[1423]: cali394131fb631: Gained carrier Mar 17 17:51:32.148642 kubelet[2599]: E0317 17:51:32.148610 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:51:32.150922 containerd[1495]: time="2025-03-17T17:51:32.150896140Z" level=info msg="CreateContainer within sandbox \"69e01cee8ce0c7aa13b8d9e80cefa48e638835c0ebb7a4f47fc504b91698f177\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 17:51:32.158895 containerd[1495]: time="2025-03-17T17:51:32.158849898Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b90a88a1e5da255529675e67dad4d325788b366e74fc643f2c63b0f09c948b45\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:51:32.159196 containerd[1495]: time="2025-03-17T17:51:32.159144934Z" level=info msg="RemovePodSandbox \"b90a88a1e5da255529675e67dad4d325788b366e74fc643f2c63b0f09c948b45\" returns successfully" Mar 17 17:51:32.160104 containerd[1495]: time="2025-03-17T17:51:32.159905312Z" level=info msg="StopPodSandbox for \"2c183dfeb323cfd9dde4cc440f77974bf851926c3aa7c9e2accc48f0e0a01822\"" Mar 17 17:51:32.160104 containerd[1495]: time="2025-03-17T17:51:32.160033143Z" level=info msg="TearDown network for sandbox \"2c183dfeb323cfd9dde4cc440f77974bf851926c3aa7c9e2accc48f0e0a01822\" successfully" Mar 17 17:51:32.160104 containerd[1495]: time="2025-03-17T17:51:32.160047119Z" level=info msg="StopPodSandbox for \"2c183dfeb323cfd9dde4cc440f77974bf851926c3aa7c9e2accc48f0e0a01822\" returns successfully" Mar 17 17:51:32.160880 containerd[1495]: time="2025-03-17T17:51:32.160624494Z" level=info msg="RemovePodSandbox for \"2c183dfeb323cfd9dde4cc440f77974bf851926c3aa7c9e2accc48f0e0a01822\"" Mar 17 17:51:32.160996 containerd[1495]: time="2025-03-17T17:51:32.160977227Z" level=info msg="Forcibly stopping sandbox \"2c183dfeb323cfd9dde4cc440f77974bf851926c3aa7c9e2accc48f0e0a01822\"" Mar 17 17:51:32.161366 containerd[1495]: time="2025-03-17T17:51:32.161259628Z" level=info msg="TearDown network for sandbox \"2c183dfeb323cfd9dde4cc440f77974bf851926c3aa7c9e2accc48f0e0a01822\" successfully" Mar 17 17:51:32.171891 containerd[1495]: 2025-03-17 17:51:30.903 [INFO][4866] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 17:51:32.171891 containerd[1495]: 2025-03-17 17:51:30.928 [INFO][4866] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--db9856--fshz9-eth0 calico-apiserver-db9856- calico-apiserver e2616273-669f-41e6-aed5-5c36404c0a1a 785 0 2025-03-17 17:50:48 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:db9856 
projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-db9856-fshz9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali394131fb631 [] []}} ContainerID="57f5f7e249fd944958034e4ebe99e9f918c4a0d9a66fb988f6400155277c053b" Namespace="calico-apiserver" Pod="calico-apiserver-db9856-fshz9" WorkloadEndpoint="localhost-k8s-calico--apiserver--db9856--fshz9-" Mar 17 17:51:32.171891 containerd[1495]: 2025-03-17 17:51:30.929 [INFO][4866] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="57f5f7e249fd944958034e4ebe99e9f918c4a0d9a66fb988f6400155277c053b" Namespace="calico-apiserver" Pod="calico-apiserver-db9856-fshz9" WorkloadEndpoint="localhost-k8s-calico--apiserver--db9856--fshz9-eth0" Mar 17 17:51:32.171891 containerd[1495]: 2025-03-17 17:51:30.969 [INFO][4880] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="57f5f7e249fd944958034e4ebe99e9f918c4a0d9a66fb988f6400155277c053b" HandleID="k8s-pod-network.57f5f7e249fd944958034e4ebe99e9f918c4a0d9a66fb988f6400155277c053b" Workload="localhost-k8s-calico--apiserver--db9856--fshz9-eth0" Mar 17 17:51:32.171891 containerd[1495]: 2025-03-17 17:51:31.077 [INFO][4880] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="57f5f7e249fd944958034e4ebe99e9f918c4a0d9a66fb988f6400155277c053b" HandleID="k8s-pod-network.57f5f7e249fd944958034e4ebe99e9f918c4a0d9a66fb988f6400155277c053b" Workload="localhost-k8s-calico--apiserver--db9856--fshz9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f59f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-db9856-fshz9", "timestamp":"2025-03-17 17:51:30.969436096 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:51:32.171891 containerd[1495]: 2025-03-17 17:51:31.077 [INFO][4880] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:51:32.171891 containerd[1495]: 2025-03-17 17:51:32.030 [INFO][4880] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 17:51:32.171891 containerd[1495]: 2025-03-17 17:51:32.034 [INFO][4880] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 17 17:51:32.171891 containerd[1495]: 2025-03-17 17:51:32.038 [INFO][4880] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.57f5f7e249fd944958034e4ebe99e9f918c4a0d9a66fb988f6400155277c053b" host="localhost" Mar 17 17:51:32.171891 containerd[1495]: 2025-03-17 17:51:32.098 [INFO][4880] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 17 17:51:32.171891 containerd[1495]: 2025-03-17 17:51:32.106 [INFO][4880] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 17 17:51:32.171891 containerd[1495]: 2025-03-17 17:51:32.108 [INFO][4880] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 17 17:51:32.171891 containerd[1495]: 2025-03-17 17:51:32.111 [INFO][4880] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 17 17:51:32.171891 containerd[1495]: 2025-03-17 17:51:32.111 [INFO][4880] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.57f5f7e249fd944958034e4ebe99e9f918c4a0d9a66fb988f6400155277c053b" host="localhost" Mar 17 17:51:32.171891 containerd[1495]: 2025-03-17 17:51:32.113 [INFO][4880] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.57f5f7e249fd944958034e4ebe99e9f918c4a0d9a66fb988f6400155277c053b Mar 17 17:51:32.171891 containerd[1495]: 2025-03-17 17:51:32.119 [INFO][4880] ipam/ipam.go 1203: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.57f5f7e249fd944958034e4ebe99e9f918c4a0d9a66fb988f6400155277c053b" host="localhost" Mar 17 17:51:32.171891 containerd[1495]: 2025-03-17 17:51:32.132 [INFO][4880] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.57f5f7e249fd944958034e4ebe99e9f918c4a0d9a66fb988f6400155277c053b" host="localhost" Mar 17 17:51:32.171891 containerd[1495]: 2025-03-17 17:51:32.132 [INFO][4880] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.57f5f7e249fd944958034e4ebe99e9f918c4a0d9a66fb988f6400155277c053b" host="localhost" Mar 17 17:51:32.171891 containerd[1495]: 2025-03-17 17:51:32.132 [INFO][4880] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 17:51:32.171891 containerd[1495]: 2025-03-17 17:51:32.132 [INFO][4880] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="57f5f7e249fd944958034e4ebe99e9f918c4a0d9a66fb988f6400155277c053b" HandleID="k8s-pod-network.57f5f7e249fd944958034e4ebe99e9f918c4a0d9a66fb988f6400155277c053b" Workload="localhost-k8s-calico--apiserver--db9856--fshz9-eth0" Mar 17 17:51:32.172977 containerd[1495]: 2025-03-17 17:51:32.142 [INFO][4866] cni-plugin/k8s.go 386: Populated endpoint ContainerID="57f5f7e249fd944958034e4ebe99e9f918c4a0d9a66fb988f6400155277c053b" Namespace="calico-apiserver" Pod="calico-apiserver-db9856-fshz9" WorkloadEndpoint="localhost-k8s-calico--apiserver--db9856--fshz9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--db9856--fshz9-eth0", GenerateName:"calico-apiserver-db9856-", Namespace:"calico-apiserver", SelfLink:"", UID:"e2616273-669f-41e6-aed5-5c36404c0a1a", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 50, 48, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"db9856", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-db9856-fshz9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali394131fb631", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:51:32.172977 containerd[1495]: 2025-03-17 17:51:32.142 [INFO][4866] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="57f5f7e249fd944958034e4ebe99e9f918c4a0d9a66fb988f6400155277c053b" Namespace="calico-apiserver" Pod="calico-apiserver-db9856-fshz9" WorkloadEndpoint="localhost-k8s-calico--apiserver--db9856--fshz9-eth0" Mar 17 17:51:32.172977 containerd[1495]: 2025-03-17 17:51:32.142 [INFO][4866] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali394131fb631 ContainerID="57f5f7e249fd944958034e4ebe99e9f918c4a0d9a66fb988f6400155277c053b" Namespace="calico-apiserver" Pod="calico-apiserver-db9856-fshz9" WorkloadEndpoint="localhost-k8s-calico--apiserver--db9856--fshz9-eth0" Mar 17 17:51:32.172977 containerd[1495]: 2025-03-17 17:51:32.149 [INFO][4866] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="57f5f7e249fd944958034e4ebe99e9f918c4a0d9a66fb988f6400155277c053b" Namespace="calico-apiserver" Pod="calico-apiserver-db9856-fshz9" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--db9856--fshz9-eth0" Mar 17 17:51:32.172977 containerd[1495]: 2025-03-17 17:51:32.152 [INFO][4866] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="57f5f7e249fd944958034e4ebe99e9f918c4a0d9a66fb988f6400155277c053b" Namespace="calico-apiserver" Pod="calico-apiserver-db9856-fshz9" WorkloadEndpoint="localhost-k8s-calico--apiserver--db9856--fshz9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--db9856--fshz9-eth0", GenerateName:"calico-apiserver-db9856-", Namespace:"calico-apiserver", SelfLink:"", UID:"e2616273-669f-41e6-aed5-5c36404c0a1a", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 50, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"db9856", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"57f5f7e249fd944958034e4ebe99e9f918c4a0d9a66fb988f6400155277c053b", Pod:"calico-apiserver-db9856-fshz9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali394131fb631", MAC:"6e:37:fe:af:50:48", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:51:32.172977 
containerd[1495]: 2025-03-17 17:51:32.166 [INFO][4866] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="57f5f7e249fd944958034e4ebe99e9f918c4a0d9a66fb988f6400155277c053b" Namespace="calico-apiserver" Pod="calico-apiserver-db9856-fshz9" WorkloadEndpoint="localhost-k8s-calico--apiserver--db9856--fshz9-eth0" Mar 17 17:51:32.176954 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 17:51:32.179685 containerd[1495]: time="2025-03-17T17:51:32.178569431Z" level=info msg="CreateContainer within sandbox \"69e01cee8ce0c7aa13b8d9e80cefa48e638835c0ebb7a4f47fc504b91698f177\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0b3f89280ab1d03d648c8e0469779b3abd9c7444ed18d657a960ea884001f828\"" Mar 17 17:51:32.179685 containerd[1495]: time="2025-03-17T17:51:32.179141045Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2c183dfeb323cfd9dde4cc440f77974bf851926c3aa7c9e2accc48f0e0a01822\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:51:32.179685 containerd[1495]: time="2025-03-17T17:51:32.179207411Z" level=info msg="RemovePodSandbox \"2c183dfeb323cfd9dde4cc440f77974bf851926c3aa7c9e2accc48f0e0a01822\" returns successfully" Mar 17 17:51:32.182048 containerd[1495]: time="2025-03-17T17:51:32.180040285Z" level=info msg="StartContainer for \"0b3f89280ab1d03d648c8e0469779b3abd9c7444ed18d657a960ea884001f828\"" Mar 17 17:51:32.182419 containerd[1495]: time="2025-03-17T17:51:32.182390182Z" level=info msg="StopPodSandbox for \"fbac00f9780cef4d54294d3c83e7122b526db0b29a522ca87666b0c7e7b9c4a0\"" Mar 17 17:51:32.182562 containerd[1495]: time="2025-03-17T17:51:32.182547067Z" level=info msg="TearDown network for sandbox \"fbac00f9780cef4d54294d3c83e7122b526db0b29a522ca87666b0c7e7b9c4a0\" successfully" Mar 17 17:51:32.182620 containerd[1495]: time="2025-03-17T17:51:32.182608743Z" level=info msg="StopPodSandbox for \"fbac00f9780cef4d54294d3c83e7122b526db0b29a522ca87666b0c7e7b9c4a0\" returns successfully" Mar 17 17:51:32.185048 containerd[1495]: time="2025-03-17T17:51:32.184997212Z" level=info msg="RemovePodSandbox for \"fbac00f9780cef4d54294d3c83e7122b526db0b29a522ca87666b0c7e7b9c4a0\"" Mar 17 17:51:32.185392 containerd[1495]: time="2025-03-17T17:51:32.185375423Z" level=info msg="Forcibly stopping sandbox \"fbac00f9780cef4d54294d3c83e7122b526db0b29a522ca87666b0c7e7b9c4a0\"" Mar 17 17:51:32.186064 containerd[1495]: time="2025-03-17T17:51:32.185820259Z" level=info msg="TearDown network for sandbox \"fbac00f9780cef4d54294d3c83e7122b526db0b29a522ca87666b0c7e7b9c4a0\" successfully" Mar 17 17:51:32.193994 containerd[1495]: time="2025-03-17T17:51:32.193946522Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fbac00f9780cef4d54294d3c83e7122b526db0b29a522ca87666b0c7e7b9c4a0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:51:32.194139 containerd[1495]: time="2025-03-17T17:51:32.194033515Z" level=info msg="RemovePodSandbox \"fbac00f9780cef4d54294d3c83e7122b526db0b29a522ca87666b0c7e7b9c4a0\" returns successfully" Mar 17 17:51:32.195764 containerd[1495]: time="2025-03-17T17:51:32.195738200Z" level=info msg="StopPodSandbox for \"d4e54e69b2c6c0978d76e94b92a77df80dc1c72566f44f295fe6bd35b1e61a73\"" Mar 17 17:51:32.195922 containerd[1495]: time="2025-03-17T17:51:32.195906476Z" level=info msg="TearDown network for sandbox \"d4e54e69b2c6c0978d76e94b92a77df80dc1c72566f44f295fe6bd35b1e61a73\" successfully" Mar 17 17:51:32.195978 containerd[1495]: time="2025-03-17T17:51:32.195963724Z" level=info msg="StopPodSandbox for \"d4e54e69b2c6c0978d76e94b92a77df80dc1c72566f44f295fe6bd35b1e61a73\" returns successfully" Mar 17 17:51:32.196431 containerd[1495]: time="2025-03-17T17:51:32.196413669Z" level=info msg="RemovePodSandbox for \"d4e54e69b2c6c0978d76e94b92a77df80dc1c72566f44f295fe6bd35b1e61a73\"" Mar 17 17:51:32.196497 containerd[1495]: time="2025-03-17T17:51:32.196484402Z" level=info msg="Forcibly stopping sandbox \"d4e54e69b2c6c0978d76e94b92a77df80dc1c72566f44f295fe6bd35b1e61a73\"" Mar 17 17:51:32.196661 containerd[1495]: time="2025-03-17T17:51:32.196612293Z" level=info msg="TearDown network for sandbox \"d4e54e69b2c6c0978d76e94b92a77df80dc1c72566f44f295fe6bd35b1e61a73\" successfully" Mar 17 17:51:32.206819 containerd[1495]: time="2025-03-17T17:51:32.206776505Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d4e54e69b2c6c0978d76e94b92a77df80dc1c72566f44f295fe6bd35b1e61a73\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:51:32.207067 containerd[1495]: time="2025-03-17T17:51:32.206999304Z" level=info msg="RemovePodSandbox \"d4e54e69b2c6c0978d76e94b92a77df80dc1c72566f44f295fe6bd35b1e61a73\" returns successfully" Mar 17 17:51:32.207761 containerd[1495]: time="2025-03-17T17:51:32.207741911Z" level=info msg="StopPodSandbox for \"ec8dc90f4256e5e13ea71e3c83af7eded25b71eda92de161921988d536de088c\"" Mar 17 17:51:32.211068 containerd[1495]: time="2025-03-17T17:51:32.211049306Z" level=info msg="TearDown network for sandbox \"ec8dc90f4256e5e13ea71e3c83af7eded25b71eda92de161921988d536de088c\" successfully" Mar 17 17:51:32.211132 containerd[1495]: time="2025-03-17T17:51:32.211119008Z" level=info msg="StopPodSandbox for \"ec8dc90f4256e5e13ea71e3c83af7eded25b71eda92de161921988d536de088c\" returns successfully" Mar 17 17:51:32.211802 containerd[1495]: time="2025-03-17T17:51:32.211781954Z" level=info msg="RemovePodSandbox for \"ec8dc90f4256e5e13ea71e3c83af7eded25b71eda92de161921988d536de088c\"" Mar 17 17:51:32.211884 containerd[1495]: time="2025-03-17T17:51:32.211870690Z" level=info msg="Forcibly stopping sandbox \"ec8dc90f4256e5e13ea71e3c83af7eded25b71eda92de161921988d536de088c\"" Mar 17 17:51:32.212066 containerd[1495]: time="2025-03-17T17:51:32.212025912Z" level=info msg="TearDown network for sandbox \"ec8dc90f4256e5e13ea71e3c83af7eded25b71eda92de161921988d536de088c\" successfully" Mar 17 17:51:32.212560 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2351543598.mount: Deactivated successfully. Mar 17 17:51:32.216135 containerd[1495]: time="2025-03-17T17:51:32.215266542Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:51:32.216135 containerd[1495]: time="2025-03-17T17:51:32.215331495Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:51:32.216135 containerd[1495]: time="2025-03-17T17:51:32.215345531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:51:32.216135 containerd[1495]: time="2025-03-17T17:51:32.215443605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:51:32.221805 containerd[1495]: time="2025-03-17T17:51:32.221751631Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ec8dc90f4256e5e13ea71e3c83af7eded25b71eda92de161921988d536de088c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 17 17:51:32.225148 systemd[1]: Started cri-containerd-0b3f89280ab1d03d648c8e0469779b3abd9c7444ed18d657a960ea884001f828.scope - libcontainer container 0b3f89280ab1d03d648c8e0469779b3abd9c7444ed18d657a960ea884001f828. 
Mar 17 17:51:32.236126 containerd[1495]: time="2025-03-17T17:51:32.236072698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9zh68,Uid:8eeb7871-e618-4798-a87d-f7b3c9c67c97,Namespace:calico-system,Attempt:6,} returns sandbox id \"e72caab7569f4216f419c4c109d6552e20a12d8b549da571f32dec78490632b6\"" Mar 17 17:51:32.236742 containerd[1495]: time="2025-03-17T17:51:32.236706289Z" level=info msg="RemovePodSandbox \"ec8dc90f4256e5e13ea71e3c83af7eded25b71eda92de161921988d536de088c\" returns successfully" Mar 17 17:51:32.242311 containerd[1495]: time="2025-03-17T17:51:32.242261919Z" level=info msg="StopPodSandbox for \"fa0ee85b093d74ef947ec6970c5d27144b44a194911f7cd2536631bd04a90cc6\"" Mar 17 17:51:32.243416 containerd[1495]: time="2025-03-17T17:51:32.243347971Z" level=info msg="TearDown network for sandbox \"fa0ee85b093d74ef947ec6970c5d27144b44a194911f7cd2536631bd04a90cc6\" successfully" Mar 17 17:51:32.243467 containerd[1495]: time="2025-03-17T17:51:32.243431118Z" level=info msg="StopPodSandbox for \"fa0ee85b093d74ef947ec6970c5d27144b44a194911f7cd2536631bd04a90cc6\" returns successfully" Mar 17 17:51:32.244742 containerd[1495]: time="2025-03-17T17:51:32.244543819Z" level=info msg="RemovePodSandbox for \"fa0ee85b093d74ef947ec6970c5d27144b44a194911f7cd2536631bd04a90cc6\"" Mar 17 17:51:32.244742 containerd[1495]: time="2025-03-17T17:51:32.244666739Z" level=info msg="Forcibly stopping sandbox \"fa0ee85b093d74ef947ec6970c5d27144b44a194911f7cd2536631bd04a90cc6\"" Mar 17 17:51:32.245955 containerd[1495]: time="2025-03-17T17:51:32.245443189Z" level=info msg="TearDown network for sandbox \"fa0ee85b093d74ef947ec6970c5d27144b44a194911f7cd2536631bd04a90cc6\" successfully" Mar 17 17:51:32.259717 containerd[1495]: time="2025-03-17T17:51:32.259535986Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fa0ee85b093d74ef947ec6970c5d27144b44a194911f7cd2536631bd04a90cc6\": an error occurred when try to find sandbox: not found. 
Sending the event with nil podSandboxStatus." Mar 17 17:51:32.259911 containerd[1495]: time="2025-03-17T17:51:32.259882807Z" level=info msg="RemovePodSandbox \"fa0ee85b093d74ef947ec6970c5d27144b44a194911f7cd2536631bd04a90cc6\" returns successfully" Mar 17 17:51:32.260810 containerd[1495]: time="2025-03-17T17:51:32.260594917Z" level=info msg="StopPodSandbox for \"c718459327930e395a835f102ab401f2e48060fe2c2c6cbef6ad483b0c4e740e\"" Mar 17 17:51:32.260810 containerd[1495]: time="2025-03-17T17:51:32.260729399Z" level=info msg="TearDown network for sandbox \"c718459327930e395a835f102ab401f2e48060fe2c2c6cbef6ad483b0c4e740e\" successfully" Mar 17 17:51:32.260810 containerd[1495]: time="2025-03-17T17:51:32.260740880Z" level=info msg="StopPodSandbox for \"c718459327930e395a835f102ab401f2e48060fe2c2c6cbef6ad483b0c4e740e\" returns successfully" Mar 17 17:51:32.261195 containerd[1495]: time="2025-03-17T17:51:32.261169135Z" level=info msg="RemovePodSandbox for \"c718459327930e395a835f102ab401f2e48060fe2c2c6cbef6ad483b0c4e740e\"" Mar 17 17:51:32.261264 containerd[1495]: time="2025-03-17T17:51:32.261196927Z" level=info msg="Forcibly stopping sandbox \"c718459327930e395a835f102ab401f2e48060fe2c2c6cbef6ad483b0c4e740e\"" Mar 17 17:51:32.261353 containerd[1495]: time="2025-03-17T17:51:32.261281006Z" level=info msg="TearDown network for sandbox \"c718459327930e395a835f102ab401f2e48060fe2c2c6cbef6ad483b0c4e740e\" successfully" Mar 17 17:51:32.266188 containerd[1495]: time="2025-03-17T17:51:32.266054277Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c718459327930e395a835f102ab401f2e48060fe2c2c6cbef6ad483b0c4e740e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:51:32.266188 containerd[1495]: time="2025-03-17T17:51:32.266107396Z" level=info msg="RemovePodSandbox \"c718459327930e395a835f102ab401f2e48060fe2c2c6cbef6ad483b0c4e740e\" returns successfully" Mar 17 17:51:32.266266 systemd[1]: Started cri-containerd-57f5f7e249fd944958034e4ebe99e9f918c4a0d9a66fb988f6400155277c053b.scope - libcontainer container 57f5f7e249fd944958034e4ebe99e9f918c4a0d9a66fb988f6400155277c053b. Mar 17 17:51:32.267207 containerd[1495]: time="2025-03-17T17:51:32.267166096Z" level=info msg="StopPodSandbox for \"7b5c001b9f3d01ac15620c2f92a043c0fc3844ae4e4b16b89c4e37e5f905e7be\"" Mar 17 17:51:32.271102 containerd[1495]: time="2025-03-17T17:51:32.267317321Z" level=info msg="TearDown network for sandbox \"7b5c001b9f3d01ac15620c2f92a043c0fc3844ae4e4b16b89c4e37e5f905e7be\" successfully" Mar 17 17:51:32.271102 containerd[1495]: time="2025-03-17T17:51:32.267403643Z" level=info msg="StopPodSandbox for \"7b5c001b9f3d01ac15620c2f92a043c0fc3844ae4e4b16b89c4e37e5f905e7be\" returns successfully" Mar 17 17:51:32.277777 containerd[1495]: time="2025-03-17T17:51:32.277725001Z" level=info msg="RemovePodSandbox for \"7b5c001b9f3d01ac15620c2f92a043c0fc3844ae4e4b16b89c4e37e5f905e7be\"" Mar 17 17:51:32.277964 containerd[1495]: time="2025-03-17T17:51:32.277950326Z" level=info msg="Forcibly stopping sandbox \"7b5c001b9f3d01ac15620c2f92a043c0fc3844ae4e4b16b89c4e37e5f905e7be\"" Mar 17 17:51:32.278212 containerd[1495]: time="2025-03-17T17:51:32.278150671Z" level=info msg="TearDown network for sandbox \"7b5c001b9f3d01ac15620c2f92a043c0fc3844ae4e4b16b89c4e37e5f905e7be\" successfully" Mar 17 17:51:32.303667 containerd[1495]: time="2025-03-17T17:51:32.303614811Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7b5c001b9f3d01ac15620c2f92a043c0fc3844ae4e4b16b89c4e37e5f905e7be\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:51:32.304511 containerd[1495]: time="2025-03-17T17:51:32.304097178Z" level=info msg="RemovePodSandbox \"7b5c001b9f3d01ac15620c2f92a043c0fc3844ae4e4b16b89c4e37e5f905e7be\" returns successfully" Mar 17 17:51:32.304511 containerd[1495]: time="2025-03-17T17:51:32.304173271Z" level=info msg="StartContainer for \"0b3f89280ab1d03d648c8e0469779b3abd9c7444ed18d657a960ea884001f828\" returns successfully" Mar 17 17:51:32.308556 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 17:51:32.350149 containerd[1495]: time="2025-03-17T17:51:32.349808742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-db9856-fshz9,Uid:e2616273-669f-41e6-aed5-5c36404c0a1a,Namespace:calico-apiserver,Attempt:6,} returns sandbox id \"57f5f7e249fd944958034e4ebe99e9f918c4a0d9a66fb988f6400155277c053b\"" Mar 17 17:51:32.391211 systemd-networkd[1423]: vxlan.calico: Link UP Mar 17 17:51:32.391224 systemd-networkd[1423]: vxlan.calico: Gained carrier Mar 17 17:51:32.456580 kubelet[2599]: E0317 17:51:32.456539 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:51:32.465253 kubelet[2599]: E0317 17:51:32.464890 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:51:32.491024 kubelet[2599]: I0317 17:51:32.490820 2599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-nk8jr" podStartSLOduration=57.490800632 podStartE2EDuration="57.490800632s" podCreationTimestamp="2025-03-17 17:50:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:51:32.47377858 +0000 UTC m=+60.882224548" 
watchObservedRunningTime="2025-03-17 17:51:32.490800632 +0000 UTC m=+60.899246600" Mar 17 17:51:32.491550 kubelet[2599]: I0317 17:51:32.491403 2599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-t9ppl" podStartSLOduration=57.491394469 podStartE2EDuration="57.491394469s" podCreationTimestamp="2025-03-17 17:50:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:51:32.487739178 +0000 UTC m=+60.896185136" watchObservedRunningTime="2025-03-17 17:51:32.491394469 +0000 UTC m=+60.899840437" Mar 17 17:51:32.780690 systemd[1]: Started sshd@9-10.0.0.104:22-10.0.0.1:57210.service - OpenSSH per-connection server daemon (10.0.0.1:57210). Mar 17 17:51:32.851495 sshd[5524]: Accepted publickey for core from 10.0.0.1 port 57210 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:51:32.853976 sshd-session[5524]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:51:32.859142 systemd-logind[1479]: New session 10 of user core. Mar 17 17:51:32.868329 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 17 17:51:32.881156 systemd-networkd[1423]: cali899860f80b6: Gained IPv6LL Mar 17 17:51:33.011119 systemd-networkd[1423]: cali5bab4aeb827: Gained IPv6LL Mar 17 17:51:33.023184 sshd[5526]: Connection closed by 10.0.0.1 port 57210 Mar 17 17:51:33.023728 sshd-session[5524]: pam_unix(sshd:session): session closed for user core Mar 17 17:51:33.030831 systemd[1]: sshd@9-10.0.0.104:22-10.0.0.1:57210.service: Deactivated successfully. Mar 17 17:51:33.037650 systemd[1]: session-10.scope: Deactivated successfully. Mar 17 17:51:33.038526 systemd-logind[1479]: Session 10 logged out. Waiting for processes to exit. Mar 17 17:51:33.039576 systemd-logind[1479]: Removed session 10. 
Mar 17 17:51:33.393518 systemd-networkd[1423]: calid73f4f4eba2: Gained IPv6LL Mar 17 17:51:33.470708 kubelet[2599]: E0317 17:51:33.470673 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:51:33.471249 kubelet[2599]: E0317 17:51:33.470793 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:51:33.521306 systemd-networkd[1423]: cali6ff094f7b1d: Gained IPv6LL Mar 17 17:51:33.614669 containerd[1495]: time="2025-03-17T17:51:33.614593343Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:51:33.615750 containerd[1495]: time="2025-03-17T17:51:33.615687751Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.2: active requests=0, bytes read=34792912" Mar 17 17:51:33.617346 containerd[1495]: time="2025-03-17T17:51:33.617316184Z" level=info msg="ImageCreate event name:\"sha256:f6a228558381bc7de7c5296ac6c4e903cfda929899c85806367a726ef6d7ff5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:51:33.619689 containerd[1495]: time="2025-03-17T17:51:33.619658530Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:6d1f392b747f912366ec5c60ee1130952c2c07e8ce24c53480187daa0e3364aa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:51:33.620330 containerd[1495]: time="2025-03-17T17:51:33.620305878Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\" with image id \"sha256:f6a228558381bc7de7c5296ac6c4e903cfda929899c85806367a726ef6d7ff5f\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\", repo digest 
\"ghcr.io/flatcar/calico/kube-controllers@sha256:6d1f392b747f912366ec5c60ee1130952c2c07e8ce24c53480187daa0e3364aa\", size \"36285984\" in 2.533941685s" Mar 17 17:51:33.620393 containerd[1495]: time="2025-03-17T17:51:33.620334882Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\" returns image reference \"sha256:f6a228558381bc7de7c5296ac6c4e903cfda929899c85806367a726ef6d7ff5f\"" Mar 17 17:51:33.621376 containerd[1495]: time="2025-03-17T17:51:33.621332028Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\"" Mar 17 17:51:33.630044 containerd[1495]: time="2025-03-17T17:51:33.629974304Z" level=info msg="CreateContainer within sandbox \"4c46ecd4219ac5783f359e7d1646f93d5def229adaf908c09535f766c57a540b\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 17 17:51:33.646005 containerd[1495]: time="2025-03-17T17:51:33.645785165Z" level=info msg="CreateContainer within sandbox \"4c46ecd4219ac5783f359e7d1646f93d5def229adaf908c09535f766c57a540b\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"c1cb4410acf11579e6b85689c1534a46d36a7f6fecddf644ae706ddb09a42759\"" Mar 17 17:51:33.646424 containerd[1495]: time="2025-03-17T17:51:33.646356510Z" level=info msg="StartContainer for \"c1cb4410acf11579e6b85689c1534a46d36a7f6fecddf644ae706ddb09a42759\"" Mar 17 17:51:33.649406 systemd-networkd[1423]: cali86c90edc25c: Gained IPv6LL Mar 17 17:51:33.681175 systemd[1]: Started cri-containerd-c1cb4410acf11579e6b85689c1534a46d36a7f6fecddf644ae706ddb09a42759.scope - libcontainer container c1cb4410acf11579e6b85689c1534a46d36a7f6fecddf644ae706ddb09a42759. 
Mar 17 17:51:33.713532 systemd-networkd[1423]: vxlan.calico: Gained IPv6LL Mar 17 17:51:33.736138 containerd[1495]: time="2025-03-17T17:51:33.736080261Z" level=info msg="StartContainer for \"c1cb4410acf11579e6b85689c1534a46d36a7f6fecddf644ae706ddb09a42759\" returns successfully" Mar 17 17:51:34.097274 systemd-networkd[1423]: cali394131fb631: Gained IPv6LL Mar 17 17:51:34.478997 kubelet[2599]: E0317 17:51:34.478845 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:51:34.496029 kubelet[2599]: I0317 17:51:34.495880 2599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7d6b67b85-j5xwp" podStartSLOduration=43.960584942 podStartE2EDuration="46.495859313s" podCreationTimestamp="2025-03-17 17:50:48 +0000 UTC" firstStartedPulling="2025-03-17 17:51:31.085912955 +0000 UTC m=+59.494358923" lastFinishedPulling="2025-03-17 17:51:33.621187326 +0000 UTC m=+62.029633294" observedRunningTime="2025-03-17 17:51:34.49491757 +0000 UTC m=+62.903363538" watchObservedRunningTime="2025-03-17 17:51:34.495859313 +0000 UTC m=+62.904305281" Mar 17 17:51:36.241127 containerd[1495]: time="2025-03-17T17:51:36.241049167Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:51:36.242112 containerd[1495]: time="2025-03-17T17:51:36.242058180Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.2: active requests=0, bytes read=42993204" Mar 17 17:51:36.243357 containerd[1495]: time="2025-03-17T17:51:36.243294430Z" level=info msg="ImageCreate event name:\"sha256:d27fc480d1ad33921c40abef2ab6828fadf6524674fdcc622f571a5abc34ad55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:51:36.245423 containerd[1495]: time="2025-03-17T17:51:36.245375795Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:3623f5b60fad0da3387a8649371b53171a4b1226f4d989d2acad9145dc0ef56f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:51:36.246045 containerd[1495]: time="2025-03-17T17:51:36.246002345Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" with image id \"sha256:d27fc480d1ad33921c40abef2ab6828fadf6524674fdcc622f571a5abc34ad55\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:3623f5b60fad0da3387a8649371b53171a4b1226f4d989d2acad9145dc0ef56f\", size \"44486324\" in 2.624636122s" Mar 17 17:51:36.246114 containerd[1495]: time="2025-03-17T17:51:36.246048713Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" returns image reference \"sha256:d27fc480d1ad33921c40abef2ab6828fadf6524674fdcc622f571a5abc34ad55\"" Mar 17 17:51:36.247043 containerd[1495]: time="2025-03-17T17:51:36.246995709Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\"" Mar 17 17:51:36.248035 containerd[1495]: time="2025-03-17T17:51:36.247981167Z" level=info msg="CreateContainer within sandbox \"f065516ea452ed4b804ad17b255116335dbf8f890933b1970356ea69bfd6f3ce\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 17 17:51:36.263610 containerd[1495]: time="2025-03-17T17:51:36.263555920Z" level=info msg="CreateContainer within sandbox \"f065516ea452ed4b804ad17b255116335dbf8f890933b1970356ea69bfd6f3ce\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6fc9637f1d571a7e751e21bfe83bccc47ed39b46ec495137c4a492f04d32c2a2\"" Mar 17 17:51:36.264453 containerd[1495]: time="2025-03-17T17:51:36.264402727Z" level=info msg="StartContainer for \"6fc9637f1d571a7e751e21bfe83bccc47ed39b46ec495137c4a492f04d32c2a2\"" Mar 17 17:51:36.304201 systemd[1]: Started cri-containerd-6fc9637f1d571a7e751e21bfe83bccc47ed39b46ec495137c4a492f04d32c2a2.scope - libcontainer container 
6fc9637f1d571a7e751e21bfe83bccc47ed39b46ec495137c4a492f04d32c2a2. Mar 17 17:51:36.350556 containerd[1495]: time="2025-03-17T17:51:36.350502150Z" level=info msg="StartContainer for \"6fc9637f1d571a7e751e21bfe83bccc47ed39b46ec495137c4a492f04d32c2a2\" returns successfully" Mar 17 17:51:37.508432 kubelet[2599]: I0317 17:51:37.508336 2599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-db9856-swh96" podStartSLOduration=45.283144363 podStartE2EDuration="49.508311s" podCreationTimestamp="2025-03-17 17:50:48 +0000 UTC" firstStartedPulling="2025-03-17 17:51:32.021691312 +0000 UTC m=+60.430137280" lastFinishedPulling="2025-03-17 17:51:36.246857939 +0000 UTC m=+64.655303917" observedRunningTime="2025-03-17 17:51:36.534091365 +0000 UTC m=+64.942537353" watchObservedRunningTime="2025-03-17 17:51:37.508311 +0000 UTC m=+65.916756968" Mar 17 17:51:38.039233 systemd[1]: Started sshd@10-10.0.0.104:22-10.0.0.1:51908.service - OpenSSH per-connection server daemon (10.0.0.1:51908). Mar 17 17:51:38.106512 sshd[5664]: Accepted publickey for core from 10.0.0.1 port 51908 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:51:38.108222 sshd-session[5664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:51:38.112445 systemd-logind[1479]: New session 11 of user core. Mar 17 17:51:38.123152 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 17 17:51:38.256624 sshd[5666]: Connection closed by 10.0.0.1 port 51908 Mar 17 17:51:38.257805 sshd-session[5664]: pam_unix(sshd:session): session closed for user core Mar 17 17:51:38.267643 systemd[1]: sshd@10-10.0.0.104:22-10.0.0.1:51908.service: Deactivated successfully. Mar 17 17:51:38.269790 systemd[1]: session-11.scope: Deactivated successfully. Mar 17 17:51:38.271501 systemd-logind[1479]: Session 11 logged out. Waiting for processes to exit. 
Mar 17 17:51:38.280646 systemd[1]: Started sshd@11-10.0.0.104:22-10.0.0.1:51912.service - OpenSSH per-connection server daemon (10.0.0.1:51912). Mar 17 17:51:38.281957 systemd-logind[1479]: Removed session 11. Mar 17 17:51:38.322295 sshd[5679]: Accepted publickey for core from 10.0.0.1 port 51912 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:51:38.323955 sshd-session[5679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:51:38.330870 systemd-logind[1479]: New session 12 of user core. Mar 17 17:51:38.339282 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 17 17:51:38.741276 sshd[5681]: Connection closed by 10.0.0.1 port 51912 Mar 17 17:51:38.741794 sshd-session[5679]: pam_unix(sshd:session): session closed for user core Mar 17 17:51:38.755877 systemd[1]: sshd@11-10.0.0.104:22-10.0.0.1:51912.service: Deactivated successfully. Mar 17 17:51:38.758748 systemd[1]: session-12.scope: Deactivated successfully. Mar 17 17:51:38.761226 systemd-logind[1479]: Session 12 logged out. Waiting for processes to exit. Mar 17 17:51:38.768496 systemd[1]: Started sshd@12-10.0.0.104:22-10.0.0.1:51928.service - OpenSSH per-connection server daemon (10.0.0.1:51928). Mar 17 17:51:38.769361 systemd-logind[1479]: Removed session 12. Mar 17 17:51:38.820680 sshd[5698]: Accepted publickey for core from 10.0.0.1 port 51928 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:51:38.822347 sshd-session[5698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:51:38.826553 systemd-logind[1479]: New session 13 of user core. Mar 17 17:51:38.836131 systemd[1]: Started session-13.scope - Session 13 of User core. 
Mar 17 17:51:38.991578 sshd[5700]: Connection closed by 10.0.0.1 port 51928 Mar 17 17:51:38.991949 sshd-session[5698]: pam_unix(sshd:session): session closed for user core Mar 17 17:51:39.015233 systemd[1]: sshd@12-10.0.0.104:22-10.0.0.1:51928.service: Deactivated successfully. Mar 17 17:51:39.018411 systemd[1]: session-13.scope: Deactivated successfully. Mar 17 17:51:39.019209 systemd-logind[1479]: Session 13 logged out. Waiting for processes to exit. Mar 17 17:51:39.020357 systemd-logind[1479]: Removed session 13. Mar 17 17:51:39.646423 containerd[1495]: time="2025-03-17T17:51:39.646341720Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:51:39.673583 containerd[1495]: time="2025-03-17T17:51:39.673488195Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.2: active requests=0, bytes read=7909887" Mar 17 17:51:39.676223 containerd[1495]: time="2025-03-17T17:51:39.676149974Z" level=info msg="ImageCreate event name:\"sha256:0fae09f861e350c042fe0db9ce9f8cc5ac4df975a5c4e4a9ddc3c6fac1552a9a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:51:39.691070 containerd[1495]: time="2025-03-17T17:51:39.690965213Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:51:39.691788 containerd[1495]: time="2025-03-17T17:51:39.691724328Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.2\" with image id \"sha256:0fae09f861e350c042fe0db9ce9f8cc5ac4df975a5c4e4a9ddc3c6fac1552a9a\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\", size \"9402991\" in 3.444688964s" Mar 17 17:51:39.691873 containerd[1495]: time="2025-03-17T17:51:39.691797716Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\" returns image reference \"sha256:0fae09f861e350c042fe0db9ce9f8cc5ac4df975a5c4e4a9ddc3c6fac1552a9a\"" Mar 17 17:51:39.693182 containerd[1495]: time="2025-03-17T17:51:39.693154189Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\"" Mar 17 17:51:39.694509 containerd[1495]: time="2025-03-17T17:51:39.694473772Z" level=info msg="CreateContainer within sandbox \"e72caab7569f4216f419c4c109d6552e20a12d8b549da571f32dec78490632b6\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 17 17:51:39.779234 containerd[1495]: time="2025-03-17T17:51:39.779173128Z" level=info msg="CreateContainer within sandbox \"e72caab7569f4216f419c4c109d6552e20a12d8b549da571f32dec78490632b6\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"03410c16991b38303e4ab76e45dada6914114e49487cc1d415f4bd3bf6a5b5a6\"" Mar 17 17:51:39.779945 containerd[1495]: time="2025-03-17T17:51:39.779908808Z" level=info msg="StartContainer for \"03410c16991b38303e4ab76e45dada6914114e49487cc1d415f4bd3bf6a5b5a6\"" Mar 17 17:51:39.817188 systemd[1]: Started cri-containerd-03410c16991b38303e4ab76e45dada6914114e49487cc1d415f4bd3bf6a5b5a6.scope - libcontainer container 03410c16991b38303e4ab76e45dada6914114e49487cc1d415f4bd3bf6a5b5a6. 
Mar 17 17:51:39.851602 containerd[1495]: time="2025-03-17T17:51:39.851557680Z" level=info msg="StartContainer for \"03410c16991b38303e4ab76e45dada6914114e49487cc1d415f4bd3bf6a5b5a6\" returns successfully" Mar 17 17:51:40.154948 containerd[1495]: time="2025-03-17T17:51:40.154887710Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:51:40.155638 containerd[1495]: time="2025-03-17T17:51:40.155587733Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.2: active requests=0, bytes read=77" Mar 17 17:51:40.157533 containerd[1495]: time="2025-03-17T17:51:40.157489669Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" with image id \"sha256:d27fc480d1ad33921c40abef2ab6828fadf6524674fdcc622f571a5abc34ad55\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:3623f5b60fad0da3387a8649371b53171a4b1226f4d989d2acad9145dc0ef56f\", size \"44486324\" in 464.237996ms" Mar 17 17:51:40.157533 containerd[1495]: time="2025-03-17T17:51:40.157517251Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" returns image reference \"sha256:d27fc480d1ad33921c40abef2ab6828fadf6524674fdcc622f571a5abc34ad55\"" Mar 17 17:51:40.158426 containerd[1495]: time="2025-03-17T17:51:40.158400831Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\"" Mar 17 17:51:40.159402 containerd[1495]: time="2025-03-17T17:51:40.159374271Z" level=info msg="CreateContainer within sandbox \"57f5f7e249fd944958034e4ebe99e9f918c4a0d9a66fb988f6400155277c053b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 17 17:51:40.171518 containerd[1495]: time="2025-03-17T17:51:40.171474736Z" level=info msg="CreateContainer within sandbox \"57f5f7e249fd944958034e4ebe99e9f918c4a0d9a66fb988f6400155277c053b\" for 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b7e8c96a616cd30a0e63ce9256af134e0a6acbdc06c1d7ebd226b2b0c64607fd\"" Mar 17 17:51:40.171989 containerd[1495]: time="2025-03-17T17:51:40.171939756Z" level=info msg="StartContainer for \"b7e8c96a616cd30a0e63ce9256af134e0a6acbdc06c1d7ebd226b2b0c64607fd\"" Mar 17 17:51:40.202209 systemd[1]: Started cri-containerd-b7e8c96a616cd30a0e63ce9256af134e0a6acbdc06c1d7ebd226b2b0c64607fd.scope - libcontainer container b7e8c96a616cd30a0e63ce9256af134e0a6acbdc06c1d7ebd226b2b0c64607fd. Mar 17 17:51:40.244053 containerd[1495]: time="2025-03-17T17:51:40.243796167Z" level=info msg="StartContainer for \"b7e8c96a616cd30a0e63ce9256af134e0a6acbdc06c1d7ebd226b2b0c64607fd\" returns successfully" Mar 17 17:51:41.498360 kubelet[2599]: I0317 17:51:41.498316 2599 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 17 17:51:41.849606 kubelet[2599]: E0317 17:51:41.849576 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:51:41.961993 containerd[1495]: time="2025-03-17T17:51:41.961900003Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:51:41.963450 containerd[1495]: time="2025-03-17T17:51:41.963344506Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2: active requests=0, bytes read=13986843" Mar 17 17:51:41.966961 containerd[1495]: time="2025-03-17T17:51:41.966924869Z" level=info msg="ImageCreate event name:\"sha256:09a5a6ea58a48ac826468e05538c78d1378e103737124f1744efea8699fc29a8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:51:41.969990 containerd[1495]: time="2025-03-17T17:51:41.969943729Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:51:41.970710 containerd[1495]: time="2025-03-17T17:51:41.970659112Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" with image id \"sha256:09a5a6ea58a48ac826468e05538c78d1378e103737124f1744efea8699fc29a8\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\", size \"15479899\" in 1.812228314s" Mar 17 17:51:41.970710 containerd[1495]: time="2025-03-17T17:51:41.970702034Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" returns image reference \"sha256:09a5a6ea58a48ac826468e05538c78d1378e103737124f1744efea8699fc29a8\"" Mar 17 17:51:41.973588 containerd[1495]: time="2025-03-17T17:51:41.973511398Z" level=info msg="CreateContainer within sandbox \"e72caab7569f4216f419c4c109d6552e20a12d8b549da571f32dec78490632b6\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 17 17:51:42.034416 containerd[1495]: time="2025-03-17T17:51:42.033756168Z" level=info msg="CreateContainer within sandbox \"e72caab7569f4216f419c4c109d6552e20a12d8b549da571f32dec78490632b6\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"1943d324485d8dafcf39d9186d1e4210d16df687413e5b0b6ac1c3182c0b23af\"" Mar 17 17:51:42.038637 containerd[1495]: time="2025-03-17T17:51:42.038601500Z" level=info msg="StartContainer for \"1943d324485d8dafcf39d9186d1e4210d16df687413e5b0b6ac1c3182c0b23af\"" Mar 17 17:51:42.086354 systemd[1]: Started cri-containerd-1943d324485d8dafcf39d9186d1e4210d16df687413e5b0b6ac1c3182c0b23af.scope - libcontainer container 1943d324485d8dafcf39d9186d1e4210d16df687413e5b0b6ac1c3182c0b23af. 
Mar 17 17:51:42.128706 containerd[1495]: time="2025-03-17T17:51:42.128494799Z" level=info msg="StartContainer for \"1943d324485d8dafcf39d9186d1e4210d16df687413e5b0b6ac1c3182c0b23af\" returns successfully" Mar 17 17:51:42.556089 kubelet[2599]: I0317 17:51:42.555575 2599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-db9856-fshz9" podStartSLOduration=46.748786615 podStartE2EDuration="54.555555858s" podCreationTimestamp="2025-03-17 17:50:48 +0000 UTC" firstStartedPulling="2025-03-17 17:51:32.351495341 +0000 UTC m=+60.759941309" lastFinishedPulling="2025-03-17 17:51:40.158264564 +0000 UTC m=+68.566710552" observedRunningTime="2025-03-17 17:51:40.505298447 +0000 UTC m=+68.913744415" watchObservedRunningTime="2025-03-17 17:51:42.555555858 +0000 UTC m=+70.964001826" Mar 17 17:51:43.027148 kubelet[2599]: I0317 17:51:43.027087 2599 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 17 17:51:43.027148 kubelet[2599]: I0317 17:51:43.027144 2599 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 17 17:51:44.004417 systemd[1]: Started sshd@13-10.0.0.104:22-10.0.0.1:51938.service - OpenSSH per-connection server daemon (10.0.0.1:51938). Mar 17 17:51:44.076335 sshd[5841]: Accepted publickey for core from 10.0.0.1 port 51938 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:51:44.077957 sshd-session[5841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:51:44.081828 systemd-logind[1479]: New session 14 of user core. Mar 17 17:51:44.092168 systemd[1]: Started session-14.scope - Session 14 of User core. 
Mar 17 17:51:44.252007 sshd[5843]: Connection closed by 10.0.0.1 port 51938 Mar 17 17:51:44.252426 sshd-session[5841]: pam_unix(sshd:session): session closed for user core Mar 17 17:51:44.257522 systemd[1]: sshd@13-10.0.0.104:22-10.0.0.1:51938.service: Deactivated successfully. Mar 17 17:51:44.260477 systemd[1]: session-14.scope: Deactivated successfully. Mar 17 17:51:44.262688 systemd-logind[1479]: Session 14 logged out. Waiting for processes to exit. Mar 17 17:51:44.264704 systemd-logind[1479]: Removed session 14. Mar 17 17:51:49.302629 systemd[1]: Started sshd@14-10.0.0.104:22-10.0.0.1:40982.service - OpenSSH per-connection server daemon (10.0.0.1:40982). Mar 17 17:51:49.377251 sshd[5855]: Accepted publickey for core from 10.0.0.1 port 40982 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:51:49.382197 sshd-session[5855]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:51:49.407641 systemd-logind[1479]: New session 15 of user core. Mar 17 17:51:49.463009 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 17 17:51:49.757705 sshd[5857]: Connection closed by 10.0.0.1 port 40982 Mar 17 17:51:49.758224 sshd-session[5855]: pam_unix(sshd:session): session closed for user core Mar 17 17:51:49.764468 systemd[1]: sshd@14-10.0.0.104:22-10.0.0.1:40982.service: Deactivated successfully. Mar 17 17:51:49.768747 systemd[1]: session-15.scope: Deactivated successfully. Mar 17 17:51:49.770774 systemd-logind[1479]: Session 15 logged out. Waiting for processes to exit. Mar 17 17:51:49.775872 systemd-logind[1479]: Removed session 15. 
Mar 17 17:51:53.849205 kubelet[2599]: E0317 17:51:53.849132 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:51:54.772207 systemd[1]: Started sshd@15-10.0.0.104:22-10.0.0.1:40990.service - OpenSSH per-connection server daemon (10.0.0.1:40990). Mar 17 17:51:54.844108 sshd[5881]: Accepted publickey for core from 10.0.0.1 port 40990 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:51:54.845753 sshd-session[5881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:51:54.849733 kubelet[2599]: E0317 17:51:54.849693 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:51:54.850533 systemd-logind[1479]: New session 16 of user core. Mar 17 17:51:54.858193 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 17 17:51:55.031648 sshd[5883]: Connection closed by 10.0.0.1 port 40990 Mar 17 17:51:55.032126 sshd-session[5881]: pam_unix(sshd:session): session closed for user core Mar 17 17:51:55.036375 systemd[1]: sshd@15-10.0.0.104:22-10.0.0.1:40990.service: Deactivated successfully. Mar 17 17:51:55.038608 systemd[1]: session-16.scope: Deactivated successfully. Mar 17 17:51:55.039314 systemd-logind[1479]: Session 16 logged out. Waiting for processes to exit. Mar 17 17:51:55.040246 systemd-logind[1479]: Removed session 16. 
Mar 17 17:51:56.565324 kubelet[2599]: I0317 17:51:56.565265 2599 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 17 17:51:56.775402 kubelet[2599]: I0317 17:51:56.775329 2599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-9zh68" podStartSLOduration=59.051728755 podStartE2EDuration="1m8.775290221s" podCreationTimestamp="2025-03-17 17:50:48 +0000 UTC" firstStartedPulling="2025-03-17 17:51:32.248061049 +0000 UTC m=+60.656507017" lastFinishedPulling="2025-03-17 17:51:41.971622515 +0000 UTC m=+70.380068483" observedRunningTime="2025-03-17 17:51:42.556061685 +0000 UTC m=+70.964507663" watchObservedRunningTime="2025-03-17 17:51:56.775290221 +0000 UTC m=+85.183736189"
Mar 17 17:51:59.849301 kubelet[2599]: E0317 17:51:59.849264 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:52:00.047321 systemd[1]: Started sshd@16-10.0.0.104:22-10.0.0.1:47154.service - OpenSSH per-connection server daemon (10.0.0.1:47154).
Mar 17 17:52:00.105945 sshd[5897]: Accepted publickey for core from 10.0.0.1 port 47154 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o
Mar 17 17:52:00.107671 sshd-session[5897]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:52:00.112202 systemd-logind[1479]: New session 17 of user core.
Mar 17 17:52:00.121206 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 17 17:52:00.250295 sshd[5899]: Connection closed by 10.0.0.1 port 47154
Mar 17 17:52:00.250694 sshd-session[5897]: pam_unix(sshd:session): session closed for user core
Mar 17 17:52:00.257163 systemd[1]: sshd@16-10.0.0.104:22-10.0.0.1:47154.service: Deactivated successfully.
Mar 17 17:52:00.259476 systemd[1]: session-17.scope: Deactivated successfully.
Mar 17 17:52:00.260253 systemd-logind[1479]: Session 17 logged out. Waiting for processes to exit.
Mar 17 17:52:00.261128 systemd-logind[1479]: Removed session 17.
Mar 17 17:52:01.546564 kubelet[2599]: E0317 17:52:01.546528 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:52:05.262331 systemd[1]: Started sshd@17-10.0.0.104:22-10.0.0.1:47168.service - OpenSSH per-connection server daemon (10.0.0.1:47168).
Mar 17 17:52:05.324542 sshd[5955]: Accepted publickey for core from 10.0.0.1 port 47168 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o
Mar 17 17:52:05.326231 sshd-session[5955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:52:05.330466 systemd-logind[1479]: New session 18 of user core.
Mar 17 17:52:05.342170 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 17 17:52:05.460578 sshd[5957]: Connection closed by 10.0.0.1 port 47168
Mar 17 17:52:05.461070 sshd-session[5955]: pam_unix(sshd:session): session closed for user core
Mar 17 17:52:05.471094 systemd[1]: sshd@17-10.0.0.104:22-10.0.0.1:47168.service: Deactivated successfully.
Mar 17 17:52:05.473118 systemd[1]: session-18.scope: Deactivated successfully.
Mar 17 17:52:05.474907 systemd-logind[1479]: Session 18 logged out. Waiting for processes to exit.
Mar 17 17:52:05.480301 systemd[1]: Started sshd@18-10.0.0.104:22-10.0.0.1:47174.service - OpenSSH per-connection server daemon (10.0.0.1:47174).
Mar 17 17:52:05.481343 systemd-logind[1479]: Removed session 18.
Mar 17 17:52:05.523599 sshd[5969]: Accepted publickey for core from 10.0.0.1 port 47174 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o
Mar 17 17:52:05.525486 sshd-session[5969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:52:05.530163 systemd-logind[1479]: New session 19 of user core.
Mar 17 17:52:05.538212 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 17 17:52:05.860133 sshd[5971]: Connection closed by 10.0.0.1 port 47174
Mar 17 17:52:05.860587 sshd-session[5969]: pam_unix(sshd:session): session closed for user core
Mar 17 17:52:05.872552 systemd[1]: sshd@18-10.0.0.104:22-10.0.0.1:47174.service: Deactivated successfully.
Mar 17 17:52:05.874507 systemd[1]: session-19.scope: Deactivated successfully.
Mar 17 17:52:05.876456 systemd-logind[1479]: Session 19 logged out. Waiting for processes to exit.
Mar 17 17:52:05.883567 systemd[1]: Started sshd@19-10.0.0.104:22-10.0.0.1:44704.service - OpenSSH per-connection server daemon (10.0.0.1:44704).
Mar 17 17:52:05.885409 systemd-logind[1479]: Removed session 19.
Mar 17 17:52:05.936640 sshd[5982]: Accepted publickey for core from 10.0.0.1 port 44704 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o
Mar 17 17:52:05.938453 sshd-session[5982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:52:05.943099 systemd-logind[1479]: New session 20 of user core.
Mar 17 17:52:05.955176 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 17 17:52:06.864905 sshd[5984]: Connection closed by 10.0.0.1 port 44704
Mar 17 17:52:06.866105 sshd-session[5982]: pam_unix(sshd:session): session closed for user core
Mar 17 17:52:06.874810 systemd[1]: sshd@19-10.0.0.104:22-10.0.0.1:44704.service: Deactivated successfully.
Mar 17 17:52:06.878920 systemd[1]: session-20.scope: Deactivated successfully.
Mar 17 17:52:06.883145 systemd-logind[1479]: Session 20 logged out. Waiting for processes to exit.
Mar 17 17:52:06.895459 systemd[1]: Started sshd@20-10.0.0.104:22-10.0.0.1:44720.service - OpenSSH per-connection server daemon (10.0.0.1:44720).
Mar 17 17:52:06.896509 systemd-logind[1479]: Removed session 20.
Mar 17 17:52:06.937613 sshd[6015]: Accepted publickey for core from 10.0.0.1 port 44720 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o
Mar 17 17:52:06.939246 sshd-session[6015]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:52:06.942962 systemd-logind[1479]: New session 21 of user core.
Mar 17 17:52:06.951143 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 17 17:52:07.166525 sshd[6018]: Connection closed by 10.0.0.1 port 44720
Mar 17 17:52:07.168224 sshd-session[6015]: pam_unix(sshd:session): session closed for user core
Mar 17 17:52:07.176447 systemd[1]: sshd@20-10.0.0.104:22-10.0.0.1:44720.service: Deactivated successfully.
Mar 17 17:52:07.179001 systemd[1]: session-21.scope: Deactivated successfully.
Mar 17 17:52:07.181234 systemd-logind[1479]: Session 21 logged out. Waiting for processes to exit.
Mar 17 17:52:07.193479 systemd[1]: Started sshd@21-10.0.0.104:22-10.0.0.1:44734.service - OpenSSH per-connection server daemon (10.0.0.1:44734).
Mar 17 17:52:07.194699 systemd-logind[1479]: Removed session 21.
Mar 17 17:52:07.234354 sshd[6028]: Accepted publickey for core from 10.0.0.1 port 44734 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o
Mar 17 17:52:07.236126 sshd-session[6028]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:52:07.240322 systemd-logind[1479]: New session 22 of user core.
Mar 17 17:52:07.252150 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 17 17:52:07.365954 sshd[6030]: Connection closed by 10.0.0.1 port 44734
Mar 17 17:52:07.366371 sshd-session[6028]: pam_unix(sshd:session): session closed for user core
Mar 17 17:52:07.370733 systemd[1]: sshd@21-10.0.0.104:22-10.0.0.1:44734.service: Deactivated successfully.
Mar 17 17:52:07.373871 systemd[1]: session-22.scope: Deactivated successfully.
Mar 17 17:52:07.374538 systemd-logind[1479]: Session 22 logged out. Waiting for processes to exit.
Mar 17 17:52:07.375850 systemd-logind[1479]: Removed session 22.
Mar 17 17:52:12.378607 systemd[1]: Started sshd@22-10.0.0.104:22-10.0.0.1:44738.service - OpenSSH per-connection server daemon (10.0.0.1:44738).
Mar 17 17:52:12.423996 sshd[6045]: Accepted publickey for core from 10.0.0.1 port 44738 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o
Mar 17 17:52:12.426063 sshd-session[6045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:52:12.431034 systemd-logind[1479]: New session 23 of user core.
Mar 17 17:52:12.448305 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 17 17:52:12.568528 sshd[6047]: Connection closed by 10.0.0.1 port 44738
Mar 17 17:52:12.568956 sshd-session[6045]: pam_unix(sshd:session): session closed for user core
Mar 17 17:52:12.574575 systemd[1]: sshd@22-10.0.0.104:22-10.0.0.1:44738.service: Deactivated successfully.
Mar 17 17:52:12.577878 systemd[1]: session-23.scope: Deactivated successfully.
Mar 17 17:52:12.578772 systemd-logind[1479]: Session 23 logged out. Waiting for processes to exit.
Mar 17 17:52:12.580117 systemd-logind[1479]: Removed session 23.
Mar 17 17:52:17.581231 systemd[1]: Started sshd@23-10.0.0.104:22-10.0.0.1:42598.service - OpenSSH per-connection server daemon (10.0.0.1:42598).
Mar 17 17:52:17.627628 sshd[6069]: Accepted publickey for core from 10.0.0.1 port 42598 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o
Mar 17 17:52:17.629580 sshd-session[6069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:52:17.634342 systemd-logind[1479]: New session 24 of user core.
Mar 17 17:52:17.642166 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 17 17:52:17.752526 sshd[6071]: Connection closed by 10.0.0.1 port 42598
Mar 17 17:52:17.753046 sshd-session[6069]: pam_unix(sshd:session): session closed for user core
Mar 17 17:52:17.757657 systemd[1]: sshd@23-10.0.0.104:22-10.0.0.1:42598.service: Deactivated successfully.
Mar 17 17:52:17.760037 systemd[1]: session-24.scope: Deactivated successfully.
Mar 17 17:52:17.760929 systemd-logind[1479]: Session 24 logged out. Waiting for processes to exit.
Mar 17 17:52:17.762200 systemd-logind[1479]: Removed session 24.
Mar 17 17:52:22.766129 systemd[1]: Started sshd@24-10.0.0.104:22-10.0.0.1:42610.service - OpenSSH per-connection server daemon (10.0.0.1:42610).
Mar 17 17:52:22.812713 sshd[6083]: Accepted publickey for core from 10.0.0.1 port 42610 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o
Mar 17 17:52:22.814482 sshd-session[6083]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:52:22.818578 systemd-logind[1479]: New session 25 of user core.
Mar 17 17:52:22.832158 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 17 17:52:22.947463 sshd[6085]: Connection closed by 10.0.0.1 port 42610
Mar 17 17:52:22.947832 sshd-session[6083]: pam_unix(sshd:session): session closed for user core
Mar 17 17:52:22.952066 systemd[1]: sshd@24-10.0.0.104:22-10.0.0.1:42610.service: Deactivated successfully.
Mar 17 17:52:22.954693 systemd[1]: session-25.scope: Deactivated successfully.
Mar 17 17:52:22.955544 systemd-logind[1479]: Session 25 logged out. Waiting for processes to exit.
Mar 17 17:52:22.956549 systemd-logind[1479]: Removed session 25.
Mar 17 17:52:27.959976 systemd[1]: Started sshd@25-10.0.0.104:22-10.0.0.1:40370.service - OpenSSH per-connection server daemon (10.0.0.1:40370).
Mar 17 17:52:28.022207 sshd[6097]: Accepted publickey for core from 10.0.0.1 port 40370 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o
Mar 17 17:52:28.023985 sshd-session[6097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:52:28.028709 systemd-logind[1479]: New session 26 of user core.
Mar 17 17:52:28.034144 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 17 17:52:28.170627 sshd[6099]: Connection closed by 10.0.0.1 port 40370
Mar 17 17:52:28.172246 sshd-session[6097]: pam_unix(sshd:session): session closed for user core
Mar 17 17:52:28.175293 systemd[1]: sshd@25-10.0.0.104:22-10.0.0.1:40370.service: Deactivated successfully.
Mar 17 17:52:28.177364 systemd[1]: session-26.scope: Deactivated successfully.
Mar 17 17:52:28.178878 systemd-logind[1479]: Session 26 logged out. Waiting for processes to exit.
Mar 17 17:52:28.179858 systemd-logind[1479]: Removed session 26.