May 16 10:03:35.843348 kernel: Linux version 6.12.20-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri May 16 08:35:41 -00 2025
May 16 10:03:35.843373 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=78d6d96a0aa8b11f9a59fc9d3462ab2980bfe0b010418a201557f96f29fa4b43
May 16 10:03:35.843382 kernel: BIOS-provided physical RAM map:
May 16 10:03:35.843389 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 16 10:03:35.843395 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 16 10:03:35.843401 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 16 10:03:35.843409 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
May 16 10:03:35.843417 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
May 16 10:03:35.843424 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
May 16 10:03:35.843430 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
May 16 10:03:35.843437 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 16 10:03:35.843443 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 16 10:03:35.843449 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 16 10:03:35.843456 kernel: NX (Execute Disable) protection: active
May 16 10:03:35.843466 kernel: APIC: Static calls initialized
May 16 10:03:35.843473 kernel: SMBIOS 2.8 present.
May 16 10:03:35.843480 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
May 16 10:03:35.843487 kernel: DMI: Memory slots populated: 1/1
May 16 10:03:35.843494 kernel: Hypervisor detected: KVM
May 16 10:03:35.843501 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 16 10:03:35.843508 kernel: kvm-clock: using sched offset of 3285879098 cycles
May 16 10:03:35.843530 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 16 10:03:35.843537 kernel: tsc: Detected 2794.748 MHz processor
May 16 10:03:35.843545 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 16 10:03:35.843555 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 16 10:03:35.843562 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
May 16 10:03:35.843569 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
May 16 10:03:35.843576 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 16 10:03:35.843585 kernel: Using GB pages for direct mapping
May 16 10:03:35.843594 kernel: ACPI: Early table checksum verification disabled
May 16 10:03:35.843603 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
May 16 10:03:35.843612 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 10:03:35.843624 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 16 10:03:35.843632 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 10:03:35.843639 kernel: ACPI: FACS 0x000000009CFE0000 000040
May 16 10:03:35.843646 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 10:03:35.843653 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 10:03:35.843660 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 10:03:35.843667 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 10:03:35.843674 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
May 16 10:03:35.843686 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
May 16 10:03:35.843694 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
May 16 10:03:35.843701 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
May 16 10:03:35.843708 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
May 16 10:03:35.843716 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
May 16 10:03:35.843723 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
May 16 10:03:35.843732 kernel: No NUMA configuration found
May 16 10:03:35.843740 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
May 16 10:03:35.843747 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
May 16 10:03:35.843755 kernel: Zone ranges:
May 16 10:03:35.843762 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 16 10:03:35.843769 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
May 16 10:03:35.843776 kernel: Normal empty
May 16 10:03:35.843784 kernel: Device empty
May 16 10:03:35.843791 kernel: Movable zone start for each node
May 16 10:03:35.843798 kernel: Early memory node ranges
May 16 10:03:35.843807 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 16 10:03:35.843815 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
May 16 10:03:35.843825 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
May 16 10:03:35.843835 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 16 10:03:35.843844 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 16 10:03:35.843854 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
May 16 10:03:35.843862 kernel: ACPI: PM-Timer IO Port: 0x608
May 16 10:03:35.843871 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 16 10:03:35.843880 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 16 10:03:35.843893 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 16 10:03:35.843901 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 16 10:03:35.843908 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 16 10:03:35.843915 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 16 10:03:35.843923 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 16 10:03:35.843930 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 16 10:03:35.843937 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 16 10:03:35.843944 kernel: TSC deadline timer available
May 16 10:03:35.843952 kernel: CPU topo: Max. logical packages: 1
May 16 10:03:35.843961 kernel: CPU topo: Max. logical dies: 1
May 16 10:03:35.843968 kernel: CPU topo: Max. dies per package: 1
May 16 10:03:35.843975 kernel: CPU topo: Max. threads per core: 1
May 16 10:03:35.843983 kernel: CPU topo: Num. cores per package: 4
May 16 10:03:35.843990 kernel: CPU topo: Num. threads per package: 4
May 16 10:03:35.843997 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
May 16 10:03:35.844004 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 16 10:03:35.844011 kernel: kvm-guest: KVM setup pv remote TLB flush
May 16 10:03:35.844019 kernel: kvm-guest: setup PV sched yield
May 16 10:03:35.844026 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
May 16 10:03:35.844035 kernel: Booting paravirtualized kernel on KVM
May 16 10:03:35.844043 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 16 10:03:35.844050 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
May 16 10:03:35.844058 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
May 16 10:03:35.844065 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
May 16 10:03:35.844072 kernel: pcpu-alloc: [0] 0 1 2 3
May 16 10:03:35.844080 kernel: kvm-guest: PV spinlocks enabled
May 16 10:03:35.844087 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 16 10:03:35.844095 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=78d6d96a0aa8b11f9a59fc9d3462ab2980bfe0b010418a201557f96f29fa4b43
May 16 10:03:35.844105 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 16 10:03:35.844113 kernel: random: crng init done
May 16 10:03:35.844120 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 16 10:03:35.844127 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 16 10:03:35.844135 kernel: Fallback order for Node 0: 0
May 16 10:03:35.844142 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
May 16 10:03:35.844149 kernel: Policy zone: DMA32
May 16 10:03:35.844157 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 16 10:03:35.844166 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 16 10:03:35.844173 kernel: ftrace: allocating 40065 entries in 157 pages
May 16 10:03:35.844181 kernel: ftrace: allocated 157 pages with 5 groups
May 16 10:03:35.844188 kernel: Dynamic Preempt: voluntary
May 16 10:03:35.844195 kernel: rcu: Preemptible hierarchical RCU implementation.
May 16 10:03:35.844203 kernel: rcu: RCU event tracing is enabled.
May 16 10:03:35.844211 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 16 10:03:35.844218 kernel: Trampoline variant of Tasks RCU enabled.
May 16 10:03:35.844226 kernel: Rude variant of Tasks RCU enabled.
May 16 10:03:35.844235 kernel: Tracing variant of Tasks RCU enabled.
May 16 10:03:35.844243 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 16 10:03:35.844250 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 16 10:03:35.844257 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 16 10:03:35.844265 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 16 10:03:35.844272 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 16 10:03:35.844280 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 16 10:03:35.844287 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 16 10:03:35.844303 kernel: Console: colour VGA+ 80x25
May 16 10:03:35.844310 kernel: printk: legacy console [ttyS0] enabled
May 16 10:03:35.844318 kernel: ACPI: Core revision 20240827
May 16 10:03:35.844336 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 16 10:03:35.844346 kernel: APIC: Switch to symmetric I/O mode setup
May 16 10:03:35.844355 kernel: x2apic enabled
May 16 10:03:35.844366 kernel: APIC: Switched APIC routing to: physical x2apic
May 16 10:03:35.844374 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 16 10:03:35.844385 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 16 10:03:35.844397 kernel: kvm-guest: setup PV IPIs
May 16 10:03:35.844410 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 16 10:03:35.844424 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
May 16 10:03:35.844437 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
May 16 10:03:35.844445 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 16 10:03:35.844453 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 16 10:03:35.844460 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 16 10:03:35.844468 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 16 10:03:35.844475 kernel: Spectre V2 : Mitigation: Retpolines
May 16 10:03:35.844485 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
May 16 10:03:35.844493 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
May 16 10:03:35.844500 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 16 10:03:35.844508 kernel: RETBleed: Mitigation: untrained return thunk
May 16 10:03:35.844565 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 16 10:03:35.844575 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 16 10:03:35.844582 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 16 10:03:35.844591 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 16 10:03:35.844602 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 16 10:03:35.844609 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 16 10:03:35.844617 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 16 10:03:35.844625 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 16 10:03:35.844632 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 16 10:03:35.844640 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 16 10:03:35.844648 kernel: Freeing SMP alternatives memory: 32K
May 16 10:03:35.844656 kernel: pid_max: default: 32768 minimum: 301
May 16 10:03:35.844663 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 16 10:03:35.844673 kernel: landlock: Up and running.
May 16 10:03:35.844680 kernel: SELinux: Initializing.
May 16 10:03:35.844688 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 16 10:03:35.844699 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 16 10:03:35.844709 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 16 10:03:35.844717 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 16 10:03:35.844733 kernel: ... version: 0
May 16 10:03:35.844742 kernel: ... bit width: 48
May 16 10:03:35.844756 kernel: ... generic registers: 6
May 16 10:03:35.844767 kernel: ... value mask: 0000ffffffffffff
May 16 10:03:35.844775 kernel: ... max period: 00007fffffffffff
May 16 10:03:35.844782 kernel: ... fixed-purpose events: 0
May 16 10:03:35.844790 kernel: ... event mask: 000000000000003f
May 16 10:03:35.844797 kernel: signal: max sigframe size: 1776
May 16 10:03:35.844805 kernel: rcu: Hierarchical SRCU implementation.
May 16 10:03:35.844814 kernel: rcu: Max phase no-delay instances is 400.
May 16 10:03:35.844824 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 16 10:03:35.844834 kernel: smp: Bringing up secondary CPUs ...
May 16 10:03:35.844847 kernel: smpboot: x86: Booting SMP configuration:
May 16 10:03:35.844856 kernel: .... node #0, CPUs: #1 #2 #3
May 16 10:03:35.844864 kernel: smp: Brought up 1 node, 4 CPUs
May 16 10:03:35.844871 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
May 16 10:03:35.844879 kernel: Memory: 2428912K/2571752K available (14336K kernel code, 2438K rwdata, 9944K rodata, 54424K init, 2536K bss, 136904K reserved, 0K cma-reserved)
May 16 10:03:35.844887 kernel: devtmpfs: initialized
May 16 10:03:35.844895 kernel: x86/mm: Memory block size: 128MB
May 16 10:03:35.844902 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 16 10:03:35.844910 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 16 10:03:35.844920 kernel: pinctrl core: initialized pinctrl subsystem
May 16 10:03:35.844928 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 16 10:03:35.844935 kernel: audit: initializing netlink subsys (disabled)
May 16 10:03:35.844943 kernel: audit: type=2000 audit(1747389812.707:1): state=initialized audit_enabled=0 res=1
May 16 10:03:35.844950 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 16 10:03:35.844958 kernel: thermal_sys: Registered thermal governor 'user_space'
May 16 10:03:35.844965 kernel: cpuidle: using governor menu
May 16 10:03:35.844973 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 16 10:03:35.844981 kernel: dca service started, version 1.12.1
May 16 10:03:35.844990 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
May 16 10:03:35.844998 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
May 16 10:03:35.845006 kernel: PCI: Using configuration type 1 for base access
May 16 10:03:35.845014 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 16 10:03:35.845021 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 16 10:03:35.845029 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 16 10:03:35.845037 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 16 10:03:35.845044 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 16 10:03:35.845052 kernel: ACPI: Added _OSI(Module Device)
May 16 10:03:35.845061 kernel: ACPI: Added _OSI(Processor Device)
May 16 10:03:35.845069 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 16 10:03:35.845077 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 16 10:03:35.845084 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 16 10:03:35.845092 kernel: ACPI: Interpreter enabled
May 16 10:03:35.845099 kernel: ACPI: PM: (supports S0 S3 S5)
May 16 10:03:35.845107 kernel: ACPI: Using IOAPIC for interrupt routing
May 16 10:03:35.845115 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 16 10:03:35.845122 kernel: PCI: Using E820 reservations for host bridge windows
May 16 10:03:35.845132 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 16 10:03:35.845139 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 16 10:03:35.845319 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 16 10:03:35.845456 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 16 10:03:35.845595 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 16 10:03:35.845606 kernel: PCI host bridge to bus 0000:00
May 16 10:03:35.845729 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 16 10:03:35.845845 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 16 10:03:35.845962 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 16 10:03:35.846081 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
May 16 10:03:35.846212 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 16 10:03:35.846386 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
May 16 10:03:35.846603 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 16 10:03:35.846775 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
May 16 10:03:35.846938 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
May 16 10:03:35.847086 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
May 16 10:03:35.847231 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
May 16 10:03:35.847386 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
May 16 10:03:35.847547 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 16 10:03:35.847734 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
May 16 10:03:35.847893 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
May 16 10:03:35.848042 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
May 16 10:03:35.848184 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
May 16 10:03:35.848345 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
May 16 10:03:35.848490 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
May 16 10:03:35.848677 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
May 16 10:03:35.848826 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
May 16 10:03:35.848992 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
May 16 10:03:35.849144 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
May 16 10:03:35.849294 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
May 16 10:03:35.849455 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
May 16 10:03:35.849633 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
May 16 10:03:35.849793 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
May 16 10:03:35.849954 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 16 10:03:35.850133 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
May 16 10:03:35.850298 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
May 16 10:03:35.850460 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
May 16 10:03:35.850673 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
May 16 10:03:35.850825 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
May 16 10:03:35.850840 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 16 10:03:35.850857 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 16 10:03:35.850867 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 16 10:03:35.850878 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 16 10:03:35.850889 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 16 10:03:35.850899 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 16 10:03:35.850910 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 16 10:03:35.850920 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 16 10:03:35.850930 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 16 10:03:35.850941 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 16 10:03:35.850955 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 16 10:03:35.850966 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 16 10:03:35.850976 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 16 10:03:35.850988 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 16 10:03:35.850998 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 16 10:03:35.851009 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 16 10:03:35.851020 kernel: iommu: Default domain type: Translated
May 16 10:03:35.851031 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 16 10:03:35.851042 kernel: PCI: Using ACPI for IRQ routing
May 16 10:03:35.851053 kernel: PCI: pci_cache_line_size set to 64 bytes
May 16 10:03:35.851066 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
May 16 10:03:35.851077 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
May 16 10:03:35.851229 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 16 10:03:35.851385 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 16 10:03:35.851548 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 16 10:03:35.851564 kernel: vgaarb: loaded
May 16 10:03:35.851576 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 16 10:03:35.851587 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 16 10:03:35.851602 kernel: clocksource: Switched to clocksource kvm-clock
May 16 10:03:35.851613 kernel: VFS: Disk quotas dquot_6.6.0
May 16 10:03:35.851625 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 16 10:03:35.851636 kernel: pnp: PnP ACPI init
May 16 10:03:35.851792 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
May 16 10:03:35.851809 kernel: pnp: PnP ACPI: found 6 devices
May 16 10:03:35.851820 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 16 10:03:35.851831 kernel: NET: Registered PF_INET protocol family
May 16 10:03:35.851845 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 16 10:03:35.851855 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 16 10:03:35.851867 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 16 10:03:35.851878 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 16 10:03:35.851889 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 16 10:03:35.851901 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 16 10:03:35.851912 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 16 10:03:35.851923 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 16 10:03:35.851934 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 16 10:03:35.851948 kernel: NET: Registered PF_XDP protocol family
May 16 10:03:35.852087 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 16 10:03:35.852219 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 16 10:03:35.852365 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 16 10:03:35.852499 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
May 16 10:03:35.852655 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
May 16 10:03:35.852787 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
May 16 10:03:35.852803 kernel: PCI: CLS 0 bytes, default 64
May 16 10:03:35.852819 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
May 16 10:03:35.852829 kernel: Initialise system trusted keyrings
May 16 10:03:35.852841 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 16 10:03:35.852852 kernel: Key type asymmetric registered
May 16 10:03:35.852863 kernel: Asymmetric key parser 'x509' registered
May 16 10:03:35.852874 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 16 10:03:35.852885 kernel: io scheduler mq-deadline registered
May 16 10:03:35.852895 kernel: io scheduler kyber registered
May 16 10:03:35.852905 kernel: io scheduler bfq registered
May 16 10:03:35.852920 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 16 10:03:35.852933 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 16 10:03:35.852944 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 16 10:03:35.852956 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 16 10:03:35.852967 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 16 10:03:35.852978 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 16 10:03:35.852990 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 16 10:03:35.853001 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 16 10:03:35.853012 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 16 10:03:35.853173 kernel: rtc_cmos 00:04: RTC can wake from S4
May 16 10:03:35.853190 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 16 10:03:35.853323 kernel: rtc_cmos 00:04: registered as rtc0
May 16 10:03:35.853472 kernel: rtc_cmos 00:04: setting system clock to 2025-05-16T10:03:35 UTC (1747389815)
May 16 10:03:35.853651 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
May 16 10:03:35.853667 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 16 10:03:35.853678 kernel: NET: Registered PF_INET6 protocol family
May 16 10:03:35.853690 kernel: Segment Routing with IPv6
May 16 10:03:35.853705 kernel: In-situ OAM (IOAM) with IPv6
May 16 10:03:35.853716 kernel: NET: Registered PF_PACKET protocol family
May 16 10:03:35.853727 kernel: Key type dns_resolver registered
May 16 10:03:35.853738 kernel: IPI shorthand broadcast: enabled
May 16 10:03:35.853750 kernel: sched_clock: Marking stable (3158002883, 112252177)->(3292333467, -22078407)
May 16 10:03:35.853761 kernel: registered taskstats version 1
May 16 10:03:35.853772 kernel: Loading compiled-in X.509 certificates
May 16 10:03:35.853783 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.20-flatcar: e880c759570be4f24e609bd9f22262b4f17bcd9e'
May 16 10:03:35.853794 kernel: Demotion targets for Node 0: null
May 16 10:03:35.853808 kernel: Key type .fscrypt registered
May 16 10:03:35.853818 kernel: Key type fscrypt-provisioning registered
May 16 10:03:35.853829 kernel: ima: No TPM chip found, activating TPM-bypass!
May 16 10:03:35.853840 kernel: ima: Allocated hash algorithm: sha1
May 16 10:03:35.853851 kernel: ima: No architecture policies found
May 16 10:03:35.853862 kernel: clk: Disabling unused clocks
May 16 10:03:35.853873 kernel: Warning: unable to open an initial console.
May 16 10:03:35.853884 kernel: Freeing unused kernel image (initmem) memory: 54424K
May 16 10:03:35.853895 kernel: Write protecting the kernel read-only data: 24576k
May 16 10:03:35.853910 kernel: Freeing unused kernel image (rodata/data gap) memory: 296K
May 16 10:03:35.853921 kernel: Run /init as init process
May 16 10:03:35.853932 kernel: with arguments:
May 16 10:03:35.853943 kernel: /init
May 16 10:03:35.853954 kernel: with environment:
May 16 10:03:35.853964 kernel: HOME=/
May 16 10:03:35.853975 kernel: TERM=linux
May 16 10:03:35.853986 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 16 10:03:35.853999 systemd[1]: Successfully made /usr/ read-only.
May 16 10:03:35.854027 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 16 10:03:35.854042 systemd[1]: Detected virtualization kvm.
May 16 10:03:35.854054 systemd[1]: Detected architecture x86-64.
May 16 10:03:35.854066 systemd[1]: Running in initrd.
May 16 10:03:35.854078 systemd[1]: No hostname configured, using default hostname.
May 16 10:03:35.854093 systemd[1]: Hostname set to .
May 16 10:03:35.854105 systemd[1]: Initializing machine ID from VM UUID.
May 16 10:03:35.854117 systemd[1]: Queued start job for default target initrd.target.
May 16 10:03:35.854130 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 16 10:03:35.854142 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 16 10:03:35.854155 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 16 10:03:35.854167 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 16 10:03:35.854180 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 16 10:03:35.854196 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 16 10:03:35.854210 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 16 10:03:35.854223 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 16 10:03:35.854235 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 16 10:03:35.854247 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 16 10:03:35.854259 systemd[1]: Reached target paths.target - Path Units.
May 16 10:03:35.854271 systemd[1]: Reached target slices.target - Slice Units.
May 16 10:03:35.854286 systemd[1]: Reached target swap.target - Swaps.
May 16 10:03:35.854298 systemd[1]: Reached target timers.target - Timer Units.
May 16 10:03:35.854310 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 16 10:03:35.854323 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 16 10:03:35.854345 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 16 10:03:35.854357 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 16 10:03:35.854370 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 16 10:03:35.854382 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 16 10:03:35.854399 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 16 10:03:35.854412 systemd[1]: Reached target sockets.target - Socket Units.
May 16 10:03:35.854424 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 16 10:03:35.854436 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 16 10:03:35.854451 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 16 10:03:35.854464 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 16 10:03:35.854479 systemd[1]: Starting systemd-fsck-usr.service...
May 16 10:03:35.854491 systemd[1]: Starting systemd-journald.service - Journal Service...
May 16 10:03:35.854503 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 16 10:03:35.854538 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 16 10:03:35.854551 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 16 10:03:35.854568 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 16 10:03:35.854581 systemd[1]: Finished systemd-fsck-usr.service.
May 16 10:03:35.854593 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 16 10:03:35.854633 systemd-journald[220]: Collecting audit messages is disabled.
May 16 10:03:35.854664 systemd-journald[220]: Journal started
May 16 10:03:35.854690 systemd-journald[220]: Runtime Journal (/run/log/journal/8cbdbb1f6cd449d99e6d553307b5b175) is 6M, max 48.6M, 42.5M free.
May 16 10:03:35.844799 systemd-modules-load[222]: Inserted module 'overlay'
May 16 10:03:35.882478 systemd[1]: Started systemd-journald.service - Journal Service.
May 16 10:03:35.882502 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 16 10:03:35.882538 kernel: Bridge firewalling registered
May 16 10:03:35.874724 systemd-modules-load[222]: Inserted module 'br_netfilter'
May 16 10:03:35.885348 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 16 10:03:35.888340 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 16 10:03:35.891399 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 16 10:03:35.898859 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 16 10:03:35.902709 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 16 10:03:35.905407 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 16 10:03:35.909296 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 16 10:03:35.919798 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 16 10:03:35.920010 systemd-tmpfiles[245]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
May 16 10:03:35.921440 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 16 10:03:35.925036 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 16 10:03:35.927418 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 16 10:03:35.931785 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 16 10:03:35.950168 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 16 10:03:35.966610 dracut-cmdline[263]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=78d6d96a0aa8b11f9a59fc9d3462ab2980bfe0b010418a201557f96f29fa4b43
May 16 10:03:35.985624 systemd-resolved[262]: Positive Trust Anchors:
May 16 10:03:35.985639 systemd-resolved[262]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 16 10:03:35.985670 systemd-resolved[262]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 16 10:03:35.988124 systemd-resolved[262]: Defaulting to hostname 'linux'.
May 16 10:03:35.989151 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 16 10:03:35.994584 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 16 10:03:36.086561 kernel: SCSI subsystem initialized
May 16 10:03:36.095553 kernel: Loading iSCSI transport class v2.0-870.
May 16 10:03:36.106564 kernel: iscsi: registered transport (tcp)
May 16 10:03:36.127813 kernel: iscsi: registered transport (qla4xxx)
May 16 10:03:36.127891 kernel: QLogic iSCSI HBA Driver
May 16 10:03:36.149348 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 16 10:03:36.167490 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 16 10:03:36.171104 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 16 10:03:36.229065 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 16 10:03:36.230792 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 16 10:03:36.294564 kernel: raid6: avx2x4 gen() 30521 MB/s
May 16 10:03:36.311551 kernel: raid6: avx2x2 gen() 31395 MB/s
May 16 10:03:36.328647 kernel: raid6: avx2x1 gen() 25908 MB/s
May 16 10:03:36.328693 kernel: raid6: using algorithm avx2x2 gen() 31395 MB/s
May 16 10:03:36.346665 kernel: raid6: .... xor() 19909 MB/s, rmw enabled
May 16 10:03:36.346732 kernel: raid6: using avx2x2 recovery algorithm
May 16 10:03:36.368556 kernel: xor: automatically using best checksumming function avx
May 16 10:03:36.531564 kernel: Btrfs loaded, zoned=no, fsverity=no
May 16 10:03:36.539479 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 16 10:03:36.542183 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 16 10:03:36.584088 systemd-udevd[472]: Using default interface naming scheme 'v255'.
May 16 10:03:36.590391 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 16 10:03:36.591408 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 16 10:03:36.614003 dracut-pre-trigger[474]: rd.md=0: removing MD RAID activation
May 16 10:03:36.643506 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 16 10:03:36.647176 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 16 10:03:36.714597 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 16 10:03:36.719116 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 16 10:03:36.761536 kernel: cryptd: max_cpu_qlen set to 1000
May 16 10:03:36.763553 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
May 16 10:03:36.787396 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 16 10:03:36.788599 kernel: AES CTR mode by8 optimization enabled
May 16 10:03:36.788611 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 16 10:03:36.788622 kernel: GPT:9289727 != 19775487
May 16 10:03:36.788631 kernel: GPT:Alternate GPT header not at the end of the disk.
May 16 10:03:36.788641 kernel: GPT:9289727 != 19775487
May 16 10:03:36.788650 kernel: GPT: Use GNU Parted to correct GPT errors.
May 16 10:03:36.788660 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 16 10:03:36.788810 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 16 10:03:36.788936 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 16 10:03:36.791472 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 16 10:03:36.794710 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 16 10:03:36.804904 kernel: libata version 3.00 loaded.
May 16 10:03:36.811018 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
May 16 10:03:36.825645 kernel: ahci 0000:00:1f.2: version 3.0
May 16 10:03:36.841782 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 16 10:03:36.841797 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
May 16 10:03:36.841946 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
May 16 10:03:36.842080 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 16 10:03:36.842208 kernel: scsi host0: ahci
May 16 10:03:36.842362 kernel: scsi host1: ahci
May 16 10:03:36.842503 kernel: scsi host2: ahci
May 16 10:03:36.842656 kernel: scsi host3: ahci
May 16 10:03:36.842792 kernel: scsi host4: ahci
May 16 10:03:36.842924 kernel: scsi host5: ahci
May 16 10:03:36.843055 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 0
May 16 10:03:36.843066 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 0
May 16 10:03:36.843076 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 0
May 16 10:03:36.843085 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 0
May 16 10:03:36.843095 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 0
May 16 10:03:36.843105 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 0
May 16 10:03:36.843616 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 16 10:03:36.870169 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 16 10:03:36.887615 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 16 10:03:36.903995 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 16 10:03:36.912763 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 16 10:03:36.915991 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 16 10:03:36.919375 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 16 10:03:36.953427 disk-uuid[632]: Primary Header is updated.
May 16 10:03:36.953427 disk-uuid[632]: Secondary Entries is updated.
May 16 10:03:36.953427 disk-uuid[632]: Secondary Header is updated.
May 16 10:03:36.956957 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 16 10:03:37.153722 kernel: ata2: SATA link down (SStatus 0 SControl 300)
May 16 10:03:37.153797 kernel: ata1: SATA link down (SStatus 0 SControl 300)
May 16 10:03:37.153808 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 16 10:03:37.155555 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 16 10:03:37.155631 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 16 10:03:37.156551 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
May 16 10:03:37.158046 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
May 16 10:03:37.158068 kernel: ata3.00: applying bridge limits
May 16 10:03:37.159542 kernel: ata3.00: configured for UDMA/100
May 16 10:03:37.161565 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
May 16 10:03:37.198563 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
May 16 10:03:37.219837 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 16 10:03:37.219868 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
May 16 10:03:37.628115 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 16 10:03:37.630385 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 16 10:03:37.638429 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 16 10:03:37.644845 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 16 10:03:37.656732 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 16 10:03:37.723020 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 16 10:03:37.967467 disk-uuid[633]: The operation has completed successfully.
May 16 10:03:37.968769 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 16 10:03:38.010912 systemd[1]: disk-uuid.service: Deactivated successfully.
May 16 10:03:38.011100 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 16 10:03:38.060366 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 16 10:03:38.084931 sh[664]: Success
May 16 10:03:38.105370 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 16 10:03:38.105407 kernel: device-mapper: uevent: version 1.0.3
May 16 10:03:38.105426 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
May 16 10:03:38.114559 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
May 16 10:03:38.145041 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 16 10:03:38.150030 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 16 10:03:38.165309 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 16 10:03:38.173092 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
May 16 10:03:38.173119 kernel: BTRFS: device fsid d5f0f383-318e-4c89-85fb-1cef494470bf devid 1 transid 40 /dev/mapper/usr (253:0) scanned by mount (676)
May 16 10:03:38.174398 kernel: BTRFS info (device dm-0): first mount of filesystem d5f0f383-318e-4c89-85fb-1cef494470bf
May 16 10:03:38.175303 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 16 10:03:38.175321 kernel: BTRFS info (device dm-0): using free-space-tree
May 16 10:03:38.180332 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 16 10:03:38.182795 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
May 16 10:03:38.185125 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 16 10:03:38.188149 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 16 10:03:38.191349 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 16 10:03:38.215574 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (709)
May 16 10:03:38.217599 kernel: BTRFS info (device vda6): first mount of filesystem cca35438-8ca2-40ed-8885-c4a914979968
May 16 10:03:38.217622 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 16 10:03:38.217633 kernel: BTRFS info (device vda6): using free-space-tree
May 16 10:03:38.224536 kernel: BTRFS info (device vda6): last unmount of filesystem cca35438-8ca2-40ed-8885-c4a914979968
May 16 10:03:38.225151 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 16 10:03:38.228730 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 16 10:03:38.302470 ignition[752]: Ignition 2.21.0
May 16 10:03:38.302483 ignition[752]: Stage: fetch-offline
May 16 10:03:38.302510 ignition[752]: no configs at "/usr/lib/ignition/base.d"
May 16 10:03:38.302534 ignition[752]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 16 10:03:38.302614 ignition[752]: parsed url from cmdline: ""
May 16 10:03:38.302618 ignition[752]: no config URL provided
May 16 10:03:38.302623 ignition[752]: reading system config file "/usr/lib/ignition/user.ign"
May 16 10:03:38.302632 ignition[752]: no config at "/usr/lib/ignition/user.ign"
May 16 10:03:38.302653 ignition[752]: op(1): [started] loading QEMU firmware config module
May 16 10:03:38.302657 ignition[752]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 16 10:03:38.313351 ignition[752]: op(1): [finished] loading QEMU firmware config module
May 16 10:03:38.329456 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 16 10:03:38.335220 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 16 10:03:38.357159 ignition[752]: parsing config with SHA512: b23f07d59c34ca61f41d06c19a2842dc1079fa6dfcfe17cd1514a191bb7b2e77e077a8e1632c815592dd0b75b9dd8ac3bab3fc2ba6e1b6e11b9de6edf69a8146
May 16 10:03:38.361337 unknown[752]: fetched base config from "system"
May 16 10:03:38.361351 unknown[752]: fetched user config from "qemu"
May 16 10:03:38.361811 ignition[752]: fetch-offline: fetch-offline passed
May 16 10:03:38.361881 ignition[752]: Ignition finished successfully
May 16 10:03:38.365295 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 16 10:03:38.376777 systemd-networkd[853]: lo: Link UP
May 16 10:03:38.376787 systemd-networkd[853]: lo: Gained carrier
May 16 10:03:38.378291 systemd-networkd[853]: Enumeration completed
May 16 10:03:38.378634 systemd-networkd[853]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 16 10:03:38.378638 systemd-networkd[853]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 16 10:03:38.379477 systemd-networkd[853]: eth0: Link UP
May 16 10:03:38.379480 systemd-networkd[853]: eth0: Gained carrier
May 16 10:03:38.379488 systemd-networkd[853]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 16 10:03:38.379859 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 16 10:03:38.380924 systemd[1]: Reached target network.target - Network.
May 16 10:03:38.381223 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 16 10:03:38.382614 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 16 10:03:38.396642 systemd-networkd[853]: eth0: DHCPv4 address 10.0.0.79/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 16 10:03:38.417217 ignition[857]: Ignition 2.21.0
May 16 10:03:38.417247 ignition[857]: Stage: kargs
May 16 10:03:38.417371 ignition[857]: no configs at "/usr/lib/ignition/base.d"
May 16 10:03:38.417381 ignition[857]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 16 10:03:38.422295 ignition[857]: kargs: kargs passed
May 16 10:03:38.422347 ignition[857]: Ignition finished successfully
May 16 10:03:38.426937 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 16 10:03:38.428431 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 16 10:03:38.457094 ignition[867]: Ignition 2.21.0
May 16 10:03:38.457107 ignition[867]: Stage: disks
May 16 10:03:38.457227 ignition[867]: no configs at "/usr/lib/ignition/base.d"
May 16 10:03:38.457246 ignition[867]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 16 10:03:38.457873 ignition[867]: disks: disks passed
May 16 10:03:38.457910 ignition[867]: Ignition finished successfully
May 16 10:03:38.462071 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 16 10:03:38.462638 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 16 10:03:38.462977 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 16 10:03:38.463315 systemd[1]: Reached target local-fs.target - Local File Systems.
May 16 10:03:38.463993 systemd[1]: Reached target sysinit.target - System Initialization.
May 16 10:03:38.464321 systemd[1]: Reached target basic.target - Basic System.
May 16 10:03:38.465697 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 16 10:03:38.494857 systemd-fsck[877]: ROOT: clean, 15/553520 files, 52789/553472 blocks
May 16 10:03:38.503945 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 16 10:03:38.505332 systemd[1]: Mounting sysroot.mount - /sysroot...
May 16 10:03:38.610555 kernel: EXT4-fs (vda9): mounted filesystem 3ba80a26-00c9-40d4-aa7e-4e9151774bbb r/w with ordered data mode. Quota mode: none.
May 16 10:03:38.611167 systemd[1]: Mounted sysroot.mount - /sysroot.
May 16 10:03:38.612263 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 16 10:03:38.615154 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 16 10:03:38.616626 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 16 10:03:38.617425 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 16 10:03:38.617461 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 16 10:03:38.617481 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 16 10:03:38.638023 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 16 10:03:38.639635 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 16 10:03:38.644548 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (885)
May 16 10:03:38.644607 kernel: BTRFS info (device vda6): first mount of filesystem cca35438-8ca2-40ed-8885-c4a914979968
May 16 10:03:38.646607 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 16 10:03:38.647556 kernel: BTRFS info (device vda6): using free-space-tree
May 16 10:03:38.651202 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 16 10:03:38.675565 initrd-setup-root[909]: cut: /sysroot/etc/passwd: No such file or directory
May 16 10:03:38.679675 initrd-setup-root[916]: cut: /sysroot/etc/group: No such file or directory
May 16 10:03:38.683604 initrd-setup-root[923]: cut: /sysroot/etc/shadow: No such file or directory
May 16 10:03:38.688275 initrd-setup-root[930]: cut: /sysroot/etc/gshadow: No such file or directory
May 16 10:03:38.777903 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 16 10:03:38.780499 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 16 10:03:38.781608 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 16 10:03:38.804538 kernel: BTRFS info (device vda6): last unmount of filesystem cca35438-8ca2-40ed-8885-c4a914979968
May 16 10:03:38.816719 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 16 10:03:38.830014 ignition[999]: INFO : Ignition 2.21.0
May 16 10:03:38.830014 ignition[999]: INFO : Stage: mount
May 16 10:03:38.831822 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d"
May 16 10:03:38.831822 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 16 10:03:38.835074 ignition[999]: INFO : mount: mount passed
May 16 10:03:38.835859 ignition[999]: INFO : Ignition finished successfully
May 16 10:03:38.838356 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 16 10:03:38.839777 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 16 10:03:39.172408 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 16 10:03:39.174031 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 16 10:03:39.202335 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (1011)
May 16 10:03:39.202376 kernel: BTRFS info (device vda6): first mount of filesystem cca35438-8ca2-40ed-8885-c4a914979968
May 16 10:03:39.202387 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 16 10:03:39.203933 kernel: BTRFS info (device vda6): using free-space-tree
May 16 10:03:39.207407 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 16 10:03:39.242858 ignition[1028]: INFO : Ignition 2.21.0
May 16 10:03:39.242858 ignition[1028]: INFO : Stage: files
May 16 10:03:39.244671 ignition[1028]: INFO : no configs at "/usr/lib/ignition/base.d"
May 16 10:03:39.244671 ignition[1028]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 16 10:03:39.244671 ignition[1028]: DEBUG : files: compiled without relabeling support, skipping
May 16 10:03:39.248221 ignition[1028]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 16 10:03:39.248221 ignition[1028]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 16 10:03:39.248221 ignition[1028]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 16 10:03:39.248221 ignition[1028]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 16 10:03:39.248221 ignition[1028]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 16 10:03:39.247815 unknown[1028]: wrote ssh authorized keys file for user: core
May 16 10:03:39.256074 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 16 10:03:39.256074 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
May 16 10:03:39.288089 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 16 10:03:39.431328 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 16 10:03:39.431328 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 16 10:03:39.436082 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 16 10:03:39.436082 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 16 10:03:39.436082 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 16 10:03:39.436082 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 16 10:03:39.436082 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 16 10:03:39.436082 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 16 10:03:39.436082 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 16 10:03:39.451218 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 16 10:03:39.451218 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 16 10:03:39.451218 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 16 10:03:39.451218 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 16 10:03:39.451218 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 16 10:03:39.451218 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1
May 16 10:03:39.850767 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 16 10:03:39.961648 systemd-networkd[853]: eth0: Gained IPv6LL
May 16 10:03:42.171851 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 16 10:03:42.171851 ignition[1028]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 16 10:03:42.175588 ignition[1028]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 16 10:03:42.183121 ignition[1028]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 16 10:03:42.183121 ignition[1028]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 16 10:03:42.183121 ignition[1028]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
May 16 10:03:42.187885 ignition[1028]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 16 10:03:42.187885 ignition[1028]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 16 10:03:42.187885 ignition[1028]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
May 16 10:03:42.187885 ignition[1028]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
May 16 10:03:42.204830 ignition[1028]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 16 10:03:42.209546 ignition[1028]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 16 10:03:42.211199 ignition[1028]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
May 16 10:03:42.211199 ignition[1028]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
May 16 10:03:42.211199 ignition[1028]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
May 16 10:03:42.211199 ignition[1028]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
May 16 10:03:42.211199 ignition[1028]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 16 10:03:42.211199 ignition[1028]: INFO : files: files passed
May 16 10:03:42.211199 ignition[1028]: INFO : Ignition finished successfully
May 16 10:03:42.220436 systemd[1]: Finished ignition-files.service - Ignition (files).
May 16 10:03:42.224597 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 16 10:03:42.227426 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 16 10:03:42.239663 systemd[1]: ignition-quench.service: Deactivated successfully.
May 16 10:03:42.239789 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 16 10:03:42.244621 initrd-setup-root-after-ignition[1057]: grep: /sysroot/oem/oem-release: No such file or directory
May 16 10:03:42.248609 initrd-setup-root-after-ignition[1059]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 16 10:03:42.248609 initrd-setup-root-after-ignition[1059]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 16 10:03:42.252731 initrd-setup-root-after-ignition[1063]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 16 10:03:42.251285 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 16 10:03:42.253753 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 16 10:03:42.258667 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 16 10:03:42.329233 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 16 10:03:42.329367 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 16 10:03:42.330259 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 16 10:03:42.334883 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 16 10:03:42.335346 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 16 10:03:42.336995 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 16 10:03:42.365955 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 16 10:03:42.367789 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 16 10:03:42.392307 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 16 10:03:42.392896 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 16 10:03:42.395461 systemd[1]: Stopped target timers.target - Timer Units.
May 16 10:03:42.395847 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 16 10:03:42.395978 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 16 10:03:42.401929 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 16 10:03:42.402563 systemd[1]: Stopped target basic.target - Basic System.
May 16 10:03:42.405896 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 16 10:03:42.408008 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 16 10:03:42.413047 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 16 10:03:42.414710 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
May 16 10:03:42.424753 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 16 10:03:42.425166 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 16 10:03:42.443697 systemd[1]: Stopped target sysinit.target - System Initialization.
May 16 10:03:42.453350 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 16 10:03:42.462945 systemd[1]: Stopped target swap.target - Swaps.
May 16 10:03:42.480363 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 16 10:03:42.484145 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 16 10:03:42.492724 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 16 10:03:42.496136 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 16 10:03:42.504473 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 16 10:03:42.510359 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 16 10:03:42.537410 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 16 10:03:42.537613 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 16 10:03:42.546035 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 16 10:03:42.546252 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 16 10:03:42.548975 systemd[1]: Stopped target paths.target - Path Units.
May 16 10:03:42.550504 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 16 10:03:42.555109 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 16 10:03:42.557642 systemd[1]: Stopped target slices.target - Slice Units.
May 16 10:03:42.563385 systemd[1]: Stopped target sockets.target - Socket Units.
May 16 10:03:42.563806 systemd[1]: iscsid.socket: Deactivated successfully.
May 16 10:03:42.563946 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 16 10:03:42.567796 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 16 10:03:42.567906 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 16 10:03:42.581182 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 16 10:03:42.582035 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 16 10:03:42.585710 systemd[1]: ignition-files.service: Deactivated successfully.
May 16 10:03:42.587166 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 16 10:03:42.591760 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 16 10:03:42.602758 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 16 10:03:42.611380 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 16 10:03:42.611654 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 16 10:03:42.613494 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 16 10:03:42.613684 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 16 10:03:42.622213 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 16 10:03:42.624112 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 16 10:03:42.648195 ignition[1084]: INFO : Ignition 2.21.0
May 16 10:03:42.649576 ignition[1084]: INFO : Stage: umount
May 16 10:03:42.649576 ignition[1084]: INFO : no configs at "/usr/lib/ignition/base.d"
May 16 10:03:42.649576 ignition[1084]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 16 10:03:42.653440 ignition[1084]: INFO : umount: umount passed
May 16 10:03:42.653440 ignition[1084]: INFO : Ignition finished successfully
May 16 10:03:42.654474 systemd[1]: ignition-mount.service: Deactivated successfully.
May 16 10:03:42.654653 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 16 10:03:42.657352 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 16 10:03:42.657919 systemd[1]: Stopped target network.target - Network.
May 16 10:03:42.661325 systemd[1]: ignition-disks.service: Deactivated successfully.
May 16 10:03:42.661401 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 16 10:03:42.662396 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 16 10:03:42.662451 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 16 10:03:42.663034 systemd[1]: ignition-setup.service: Deactivated successfully.
May 16 10:03:42.663105 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 16 10:03:42.663480 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 16 10:03:42.663545 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 16 10:03:42.664262 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 16 10:03:42.671747 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 16 10:03:42.699922 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 16 10:03:42.700177 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 16 10:03:42.710823 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 16 10:03:42.711156 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 16 10:03:42.712430 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 16 10:03:42.717627 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 16 10:03:42.718444 systemd[1]: Stopped target network-pre.target - Preparation for Network.
May 16 10:03:42.724213 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 16 10:03:42.724271 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 16 10:03:42.726042 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 16 10:03:42.736928 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 16 10:03:42.737043 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 16 10:03:42.744864 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 16 10:03:42.744973 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 16 10:03:42.755504 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 16 10:03:42.755627 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 16 10:03:42.760201 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 16 10:03:42.760291 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 16 10:03:42.765739 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 16 10:03:42.767834 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 16 10:03:42.767932 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 16 10:03:42.795756 systemd[1]: network-cleanup.service: Deactivated successfully.
May 16 10:03:42.795928 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 16 10:03:42.798323 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 16 10:03:42.798595 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 16 10:03:42.806845 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 16 10:03:42.806957 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 16 10:03:42.814427 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 16 10:03:42.814508 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 16 10:03:42.814878 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 16 10:03:42.814950 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 16 10:03:42.826060 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 16 10:03:42.826174 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 16 10:03:42.831476 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 16 10:03:42.831605 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 16 10:03:42.835018 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 16 10:03:42.849628 systemd[1]: systemd-network-generator.service: Deactivated successfully.
May 16 10:03:42.849751 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
May 16 10:03:42.862951 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 16 10:03:42.863055 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 16 10:03:42.881368 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 16 10:03:42.881473 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 16 10:03:42.890860 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 16 10:03:42.890961 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 16 10:03:42.891545 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 16 10:03:42.891610 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 16 10:03:42.900560 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
May 16 10:03:42.900643 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
May 16 10:03:42.900709 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 16 10:03:42.900777 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 16 10:03:42.901373 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 16 10:03:42.901535 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 16 10:03:42.956303 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 16 10:03:42.956498 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 16 10:03:42.957822 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 16 10:03:42.958827 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 16 10:03:42.958926 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 16 10:03:42.960501 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 16 10:03:43.002955 systemd[1]: Switching root.
May 16 10:03:43.047470 systemd-journald[220]: Journal stopped
May 16 10:03:44.472979 systemd-journald[220]: Received SIGTERM from PID 1 (systemd).
May 16 10:03:44.473051 kernel: SELinux: policy capability network_peer_controls=1
May 16 10:03:44.473074 kernel: SELinux: policy capability open_perms=1
May 16 10:03:44.473089 kernel: SELinux: policy capability extended_socket_class=1
May 16 10:03:44.473100 kernel: SELinux: policy capability always_check_network=0
May 16 10:03:44.473111 kernel: SELinux: policy capability cgroup_seclabel=1
May 16 10:03:44.473122 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 16 10:03:44.473133 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 16 10:03:44.473144 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 16 10:03:44.473155 kernel: SELinux: policy capability userspace_initial_context=0
May 16 10:03:44.473168 kernel: audit: type=1403 audit(1747389823.545:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 16 10:03:44.473180 systemd[1]: Successfully loaded SELinux policy in 62.341ms.
May 16 10:03:44.473199 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 19.565ms.
May 16 10:03:44.473216 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 16 10:03:44.473229 systemd[1]: Detected virtualization kvm.
May 16 10:03:44.473241 systemd[1]: Detected architecture x86-64.
May 16 10:03:44.473257 systemd[1]: Detected first boot.
May 16 10:03:44.473269 systemd[1]: Initializing machine ID from VM UUID.
May 16 10:03:44.473280 zram_generator::config[1129]: No configuration found.
May 16 10:03:44.473365 kernel: Guest personality initialized and is inactive
May 16 10:03:44.473378 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
May 16 10:03:44.473389 kernel: Initialized host personality
May 16 10:03:44.473400 kernel: NET: Registered PF_VSOCK protocol family
May 16 10:03:44.473412 systemd[1]: Populated /etc with preset unit settings.
May 16 10:03:44.473425 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 16 10:03:44.473437 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 16 10:03:44.473448 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 16 10:03:44.473463 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 16 10:03:44.473475 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 16 10:03:44.473487 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 16 10:03:44.473499 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 16 10:03:44.473522 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 16 10:03:44.473535 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 16 10:03:44.473547 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 16 10:03:44.473559 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 16 10:03:44.473573 systemd[1]: Created slice user.slice - User and Session Slice.
May 16 10:03:44.473586 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 16 10:03:44.473597 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 16 10:03:44.473609 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 16 10:03:44.473621 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 16 10:03:44.473634 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 16 10:03:44.473650 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 16 10:03:44.473664 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 16 10:03:44.473677 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 16 10:03:44.473689 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 16 10:03:44.473701 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 16 10:03:44.473713 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 16 10:03:44.473725 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 16 10:03:44.473737 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 16 10:03:44.473748 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 16 10:03:44.473760 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 16 10:03:44.473772 systemd[1]: Reached target slices.target - Slice Units.
May 16 10:03:44.473785 systemd[1]: Reached target swap.target - Swaps.
May 16 10:03:44.473797 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 16 10:03:44.473809 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 16 10:03:44.473821 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 16 10:03:44.473833 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 16 10:03:44.473845 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 16 10:03:44.473856 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 16 10:03:44.473868 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 16 10:03:44.473880 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 16 10:03:44.473898 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 16 10:03:44.473911 systemd[1]: Mounting media.mount - External Media Directory...
May 16 10:03:44.473923 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 10:03:44.473935 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 16 10:03:44.473946 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 16 10:03:44.473958 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 16 10:03:44.473971 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 16 10:03:44.473982 systemd[1]: Reached target machines.target - Containers.
May 16 10:03:44.473994 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 16 10:03:44.474008 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 16 10:03:44.474030 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 16 10:03:44.474042 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 16 10:03:44.474053 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 16 10:03:44.474065 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 16 10:03:44.474077 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 16 10:03:44.474088 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 16 10:03:44.474837 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 16 10:03:44.474859 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 16 10:03:44.474875 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 16 10:03:44.474889 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 16 10:03:44.474903 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 16 10:03:44.474918 systemd[1]: Stopped systemd-fsck-usr.service.
May 16 10:03:44.474935 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 16 10:03:44.474950 kernel: loop: module loaded
May 16 10:03:44.474965 systemd[1]: Starting systemd-journald.service - Journal Service...
May 16 10:03:44.474977 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 16 10:03:44.474992 kernel: fuse: init (API version 7.41)
May 16 10:03:44.475003 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 16 10:03:44.475026 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 16 10:03:44.475038 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 16 10:03:44.475050 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 16 10:03:44.475064 systemd[1]: verity-setup.service: Deactivated successfully.
May 16 10:03:44.475076 systemd[1]: Stopped verity-setup.service.
May 16 10:03:44.475088 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 10:03:44.475101 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 16 10:03:44.475115 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 16 10:03:44.475151 systemd-journald[1204]: Collecting audit messages is disabled.
May 16 10:03:44.475173 systemd[1]: Mounted media.mount - External Media Directory.
May 16 10:03:44.475185 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 16 10:03:44.475197 systemd-journald[1204]: Journal started
May 16 10:03:44.475219 systemd-journald[1204]: Runtime Journal (/run/log/journal/8cbdbb1f6cd449d99e6d553307b5b175) is 6M, max 48.6M, 42.5M free.
May 16 10:03:44.199828 systemd[1]: Queued start job for default target multi-user.target.
May 16 10:03:44.218906 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 16 10:03:44.219405 systemd[1]: systemd-journald.service: Deactivated successfully.
May 16 10:03:44.478532 systemd[1]: Started systemd-journald.service - Journal Service.
May 16 10:03:44.479844 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 16 10:03:44.481221 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 16 10:03:44.482645 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 16 10:03:44.484300 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 16 10:03:44.486005 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 16 10:03:44.486219 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 16 10:03:44.487864 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 16 10:03:44.488086 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 16 10:03:44.489696 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 16 10:03:44.489894 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 16 10:03:44.491598 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 16 10:03:44.491807 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 16 10:03:44.493340 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 16 10:03:44.493553 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 16 10:03:44.495134 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 16 10:03:44.496738 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 16 10:03:44.498490 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 16 10:03:44.502669 kernel: ACPI: bus type drm_connector registered
May 16 10:03:44.502976 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 16 10:03:44.503219 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 16 10:03:44.513593 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 16 10:03:44.516703 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 16 10:03:44.519265 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 16 10:03:44.521670 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 16 10:03:44.522979 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 16 10:03:44.523020 systemd[1]: Reached target local-fs.target - Local File Systems.
May 16 10:03:44.525180 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 16 10:03:44.529643 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 16 10:03:44.531044 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 16 10:03:44.532397 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 16 10:03:44.535958 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 16 10:03:44.537384 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 16 10:03:44.538759 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 16 10:03:44.540142 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 16 10:03:44.541539 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 16 10:03:44.545678 systemd-journald[1204]: Time spent on flushing to /var/log/journal/8cbdbb1f6cd449d99e6d553307b5b175 is 25.430ms for 977 entries.
May 16 10:03:44.545678 systemd-journald[1204]: System Journal (/var/log/journal/8cbdbb1f6cd449d99e6d553307b5b175) is 8M, max 195.6M, 187.6M free.
May 16 10:03:44.578275 systemd-journald[1204]: Received client request to flush runtime journal.
May 16 10:03:44.545639 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 16 10:03:44.560345 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 16 10:03:44.563890 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 16 10:03:44.566758 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 16 10:03:44.579552 kernel: loop0: detected capacity change from 0 to 113872
May 16 10:03:44.582694 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 16 10:03:44.585803 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 16 10:03:44.587915 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 16 10:03:44.595595 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 16 10:03:44.599951 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 16 10:03:44.601977 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 16 10:03:44.610556 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 16 10:03:44.616217 systemd-tmpfiles[1249]: ACLs are not supported, ignoring.
May 16 10:03:44.616240 systemd-tmpfiles[1249]: ACLs are not supported, ignoring.
May 16 10:03:44.623923 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 16 10:03:44.628029 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 16 10:03:44.633544 kernel: loop1: detected capacity change from 0 to 146240
May 16 10:03:44.642744 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 16 10:03:44.670556 kernel: loop2: detected capacity change from 0 to 218376
May 16 10:03:44.676362 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 16 10:03:44.680653 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 16 10:03:44.695114 kernel: loop3: detected capacity change from 0 to 113872
May 16 10:03:44.707583 kernel: loop4: detected capacity change from 0 to 146240
May 16 10:03:44.711247 systemd-tmpfiles[1273]: ACLs are not supported, ignoring.
May 16 10:03:44.711616 systemd-tmpfiles[1273]: ACLs are not supported, ignoring.
May 16 10:03:44.718269 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 16 10:03:44.725578 kernel: loop5: detected capacity change from 0 to 218376
May 16 10:03:44.732197 (sd-merge)[1274]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 16 10:03:44.732862 (sd-merge)[1274]: Merged extensions into '/usr'.
May 16 10:03:44.737921 systemd[1]: Reload requested from client PID 1248 ('systemd-sysext') (unit systemd-sysext.service)...
May 16 10:03:44.737937 systemd[1]: Reloading...
May 16 10:03:44.806555 zram_generator::config[1302]: No configuration found.
May 16 10:03:44.900365 ldconfig[1243]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 16 10:03:44.910990 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 16 10:03:44.990566 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 16 10:03:44.990750 systemd[1]: Reloading finished in 252 ms.
May 16 10:03:45.016672 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 16 10:03:45.018397 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 16 10:03:45.032938 systemd[1]: Starting ensure-sysext.service...
May 16 10:03:45.034804 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 16 10:03:45.046328 systemd[1]: Reload requested from client PID 1339 ('systemctl') (unit ensure-sysext.service)...
May 16 10:03:45.046347 systemd[1]: Reloading...
May 16 10:03:45.059250 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
May 16 10:03:45.059288 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
May 16 10:03:45.059583 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 16 10:03:45.059823 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 16 10:03:45.060739 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 16 10:03:45.061008 systemd-tmpfiles[1340]: ACLs are not supported, ignoring.
May 16 10:03:45.061082 systemd-tmpfiles[1340]: ACLs are not supported, ignoring.
May 16 10:03:45.064881 systemd-tmpfiles[1340]: Detected autofs mount point /boot during canonicalization of boot.
May 16 10:03:45.064959 systemd-tmpfiles[1340]: Skipping /boot
May 16 10:03:45.077626 systemd-tmpfiles[1340]: Detected autofs mount point /boot during canonicalization of boot.
May 16 10:03:45.077640 systemd-tmpfiles[1340]: Skipping /boot
May 16 10:03:45.095553 zram_generator::config[1364]: No configuration found.
May 16 10:03:45.207029 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 16 10:03:45.289848 systemd[1]: Reloading finished in 243 ms.
May 16 10:03:45.317029 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 16 10:03:45.346207 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 16 10:03:45.354930 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 16 10:03:45.357416 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 16 10:03:45.370586 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 16 10:03:45.373921 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 16 10:03:45.377696 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 16 10:03:45.380738 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 16 10:03:45.384875 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 10:03:45.385052 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 16 10:03:45.388788 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 16 10:03:45.391852 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 16 10:03:45.395345 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 16 10:03:45.396671 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 16 10:03:45.396885 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 16 10:03:45.399562 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 16 10:03:45.400776 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 10:03:45.402096 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 16 10:03:45.410424 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 16 10:03:45.412255 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 16 10:03:45.414182 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 16 10:03:45.414378 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 16 10:03:45.416355 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 16 10:03:45.416575 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 16 10:03:45.427165 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 10:03:45.427674 augenrules[1439]: No rules
May 16 10:03:45.427751 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 16 10:03:45.428121 systemd-udevd[1413]: Using default interface naming scheme 'v255'.
May 16 10:03:45.430654 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 16 10:03:45.435023 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 16 10:03:45.443633 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 16 10:03:45.444808 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 16 10:03:45.444996 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 16 10:03:45.446218 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 16 10:03:45.447335 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 10:03:45.448674 systemd[1]: audit-rules.service: Deactivated successfully.
May 16 10:03:45.448952 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 16 10:03:45.450676 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 16 10:03:45.452475 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 16 10:03:45.454854 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 16 10:03:45.455155 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 16 10:03:45.457178 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 16 10:03:45.457409 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 16 10:03:45.459367 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 16 10:03:45.461893 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 16 10:03:45.463721 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 16 10:03:45.466824 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 16 10:03:45.482572 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 16 10:03:45.499385 systemd[1]: Finished ensure-sysext.service.
May 16 10:03:45.507469 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 10:03:45.509351 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 16 10:03:45.511120 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 16 10:03:45.512203 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 16 10:03:45.514493 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 16 10:03:45.522282 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 16 10:03:45.524367 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 16 10:03:45.525534 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 16 10:03:45.525563 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 16 10:03:45.528632 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 16 10:03:45.538535 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 16 10:03:45.540030 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 16 10:03:45.540056 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 10:03:45.540681 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 16 10:03:45.540888 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 16 10:03:45.542781 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 16 10:03:45.543203 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 16 10:03:45.544961 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 16 10:03:45.545342 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 16 10:03:45.546947 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 16 10:03:45.551244 augenrules[1488]: /sbin/augenrules: No change
May 16 10:03:45.554829 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 16 10:03:45.555459 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 16 10:03:45.563729 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 16 10:03:45.564216 augenrules[1521]: No rules
May 16 10:03:45.564458 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 16 10:03:45.567290 systemd[1]: audit-rules.service: Deactivated successfully.
May 16 10:03:45.568120 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 16 10:03:45.591917 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 16 10:03:45.594843 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 16 10:03:45.602580 kernel: mousedev: PS/2 mouse device common for all mice
May 16 10:03:45.620147 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 16 10:03:45.621620 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
May 16 10:03:45.632550 kernel: ACPI: button: Power Button [PWRF]
May 16 10:03:45.662726 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
May 16 10:03:45.663010 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 16 10:03:45.724628 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 16 10:03:45.737039 systemd-networkd[1496]: lo: Link UP
May 16 10:03:45.737052 systemd-networkd[1496]: lo: Gained carrier
May 16 10:03:45.738893 systemd-networkd[1496]: Enumeration completed
May 16 10:03:45.739079 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 16 10:03:45.741882 systemd-networkd[1496]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 16 10:03:45.741902 systemd-networkd[1496]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 16 10:03:45.744202 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 16 10:03:45.745859 systemd-networkd[1496]: eth0: Link UP
May 16 10:03:45.746008 systemd-networkd[1496]: eth0: Gained carrier
May 16 10:03:45.746021 systemd-networkd[1496]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 16 10:03:45.752820 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 16 10:03:45.761147 systemd-networkd[1496]: eth0: DHCPv4 address 10.0.0.79/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 16 10:03:45.767644 systemd-resolved[1410]: Positive Trust Anchors:
May 16 10:03:45.767657 systemd-resolved[1410]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 16 10:03:45.767688 systemd-resolved[1410]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 16 10:03:45.771289 systemd-resolved[1410]: Defaulting to hostname 'linux'.
May 16 10:03:45.773860 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 16 10:03:45.775263 systemd[1]: Reached target network.target - Network.
May 16 10:03:45.776587 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 16 10:03:45.782560 kernel: kvm_amd: TSC scaling supported
May 16 10:03:45.782593 kernel: kvm_amd: Nested Virtualization enabled
May 16 10:03:45.782606 kernel: kvm_amd: Nested Paging enabled
May 16 10:03:45.783571 kernel: kvm_amd: LBR virtualization supported
May 16 10:03:45.784538 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
May 16 10:03:45.784605 kernel: kvm_amd: Virtual GIF supported
May 16 10:03:45.790498 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 16 10:03:45.856145 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 16 10:03:45.857499 systemd-timesyncd[1505]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 16 10:03:45.859589 systemd-timesyncd[1505]: Initial clock synchronization to Fri 2025-05-16 10:03:46.241729 UTC.
May 16 10:03:45.868544 kernel: EDAC MC: Ver: 3.0.0
May 16 10:03:45.893470 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 16 10:03:45.895925 systemd[1]: Reached target sysinit.target - System Initialization.
May 16 10:03:45.897385 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 16 10:03:45.898808 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 16 10:03:45.900285 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
May 16 10:03:45.901708 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 16 10:03:45.903255 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 16 10:03:45.903287 systemd[1]: Reached target paths.target - Path Units.
May 16 10:03:45.904394 systemd[1]: Reached target time-set.target - System Time Set.
May 16 10:03:45.905832 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 16 10:03:45.907224 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 16 10:03:45.908746 systemd[1]: Reached target timers.target - Timer Units.
May 16 10:03:45.910877 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 16 10:03:45.913914 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 16 10:03:45.917736 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 16 10:03:45.919387 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 16 10:03:45.920947 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 16 10:03:45.930267 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 16 10:03:45.932590 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 16 10:03:45.934893 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 16 10:03:45.937191 systemd[1]: Reached target sockets.target - Socket Units.
May 16 10:03:45.938429 systemd[1]: Reached target basic.target - Basic System.
May 16 10:03:45.939652 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 16 10:03:45.939688 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 16 10:03:45.941359 systemd[1]: Starting containerd.service - containerd container runtime...
May 16 10:03:45.944041 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 16 10:03:45.946391 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 16 10:03:45.948087 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 16 10:03:45.958598 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 16 10:03:45.960100 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 16 10:03:45.961729 jq[1566]: false
May 16 10:03:45.962157 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
May 16 10:03:45.965005 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 16 10:03:45.968636 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 16 10:03:45.973652 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 16 10:03:45.976870 google_oslogin_nss_cache[1568]: oslogin_cache_refresh[1568]: Refreshing passwd entry cache
May 16 10:03:45.976838 oslogin_cache_refresh[1568]: Refreshing passwd entry cache
May 16 10:03:45.977700 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 16 10:03:45.982003 extend-filesystems[1567]: Found loop3
May 16 10:03:45.982321 systemd[1]: Starting systemd-logind.service - User Login Management...
May 16 10:03:45.982977 extend-filesystems[1567]: Found loop4
May 16 10:03:45.983953 extend-filesystems[1567]: Found loop5
May 16 10:03:45.983953 extend-filesystems[1567]: Found sr0
May 16 10:03:45.983953 extend-filesystems[1567]: Found vda
May 16 10:03:45.983953 extend-filesystems[1567]: Found vda1
May 16 10:03:45.983953 extend-filesystems[1567]: Found vda2
May 16 10:03:45.983953 extend-filesystems[1567]: Found vda3
May 16 10:03:45.983953 extend-filesystems[1567]: Found usr
May 16 10:03:45.983953 extend-filesystems[1567]: Found vda4
May 16 10:03:45.983953 extend-filesystems[1567]: Found vda6
May 16 10:03:45.983953 extend-filesystems[1567]: Found vda7
May 16 10:03:45.983953 extend-filesystems[1567]: Found vda9
May 16 10:03:45.983953 extend-filesystems[1567]: Checking size of /dev/vda9
May 16 10:03:45.986674 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 16 10:03:45.990333 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 16 10:03:45.993666 systemd[1]: Starting update-engine.service - Update Engine...
May 16 10:03:45.997281 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 16 10:03:46.001172 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 16 10:03:46.002888 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 16 10:03:46.003380 extend-filesystems[1567]: Resized partition /dev/vda9
May 16 10:03:46.004591 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 16 10:03:46.005008 systemd[1]: motdgen.service: Deactivated successfully.
May 16 10:03:46.005278 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 16 10:03:46.008479 extend-filesystems[1589]: resize2fs 1.47.2 (1-Jan-2025)
May 16 10:03:46.013526 google_oslogin_nss_cache[1568]: oslogin_cache_refresh[1568]: Failure getting users, quitting
May 16 10:03:46.013526 google_oslogin_nss_cache[1568]: oslogin_cache_refresh[1568]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 16 10:03:46.013526 google_oslogin_nss_cache[1568]: oslogin_cache_refresh[1568]: Refreshing group entry cache
May 16 10:03:46.013264 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 16 10:03:46.013007 oslogin_cache_refresh[1568]: Failure getting users, quitting
May 16 10:03:46.013527 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 16 10:03:46.013031 oslogin_cache_refresh[1568]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 16 10:03:46.013091 oslogin_cache_refresh[1568]: Refreshing group entry cache
May 16 10:03:46.017611 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 16 10:03:46.021632 google_oslogin_nss_cache[1568]: oslogin_cache_refresh[1568]: Failure getting groups, quitting
May 16 10:03:46.021632 google_oslogin_nss_cache[1568]: oslogin_cache_refresh[1568]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 16 10:03:46.021700 jq[1585]: true
May 16 10:03:46.021024 oslogin_cache_refresh[1568]: Failure getting groups, quitting
May 16 10:03:46.021042 oslogin_cache_refresh[1568]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 16 10:03:46.026514 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
May 16 10:03:46.026939 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
May 16 10:03:46.037763 (ntainerd)[1591]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 16 10:03:46.102991 update_engine[1582]: I20250516 10:03:46.055981 1582 main.cc:92] Flatcar Update Engine starting
May 16 10:03:46.103356 jq[1596]: true
May 16 10:03:46.110088 tar[1588]: linux-amd64/LICENSE
May 16 10:03:46.110541 tar[1588]: linux-amd64/helm
May 16 10:03:46.125542 dbus-daemon[1564]: [system] SELinux support is enabled
May 16 10:03:46.126384 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 16 10:03:46.133607 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 16 10:03:46.140283 update_engine[1582]: I20250516 10:03:46.138454 1582 update_check_scheduler.cc:74] Next update check in 11m20s
May 16 10:03:46.133646 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 16 10:03:46.137747 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 16 10:03:46.137775 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 16 10:03:46.144631 systemd-logind[1576]: Watching system buttons on /dev/input/event2 (Power Button)
May 16 10:03:46.221878 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 16 10:03:46.144752 systemd-logind[1576]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 16 10:03:46.145268 systemd[1]: Started update-engine.service - Update Engine.
May 16 10:03:46.146152 systemd-logind[1576]: New seat seat0.
May 16 10:03:46.155410 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 16 10:03:46.166298 systemd[1]: Started systemd-logind.service - User Login Management.
May 16 10:03:46.225913 extend-filesystems[1589]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 16 10:03:46.225913 extend-filesystems[1589]: old_desc_blocks = 1, new_desc_blocks = 1
May 16 10:03:46.225913 extend-filesystems[1589]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 16 10:03:46.233443 extend-filesystems[1567]: Resized filesystem in /dev/vda9
May 16 10:03:46.227301 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 16 10:03:46.234764 bash[1621]: Updated "/home/core/.ssh/authorized_keys"
May 16 10:03:46.227713 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 16 10:03:46.240516 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 16 10:03:46.243409 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 16 10:03:46.248450 locksmithd[1628]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 16 10:03:46.306847 containerd[1591]: time="2025-05-16T10:03:46Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
May 16 10:03:46.310529 containerd[1591]: time="2025-05-16T10:03:46.310449046Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
May 16 10:03:46.320176 containerd[1591]: time="2025-05-16T10:03:46.320131380Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.883µs"
May 16 10:03:46.320176 containerd[1591]: time="2025-05-16T10:03:46.320155446Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
May 16 10:03:46.320176 containerd[1591]: time="2025-05-16T10:03:46.320176163Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
May 16 10:03:46.320395 containerd[1591]: time="2025-05-16T10:03:46.320360387Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
May 16 10:03:46.320395 containerd[1591]: time="2025-05-16T10:03:46.320382930Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
May 16 10:03:46.320469 containerd[1591]: time="2025-05-16T10:03:46.320412425Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 16 10:03:46.320528 containerd[1591]: time="2025-05-16T10:03:46.320497057Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 16 10:03:46.320528 containerd[1591]: time="2025-05-16T10:03:46.320513175Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 16 10:03:46.320829 containerd[1591]: time="2025-05-16T10:03:46.320789022Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 16 10:03:46.320829 containerd[1591]: time="2025-05-16T10:03:46.320809087Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 16 10:03:46.320829 containerd[1591]: time="2025-05-16T10:03:46.320823220Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 16 10:03:46.320930 containerd[1591]: time="2025-05-16T10:03:46.320834141Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
May 16 10:03:46.320981 containerd[1591]: time="2025-05-16T10:03:46.320956614Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
May 16 10:03:46.321253 containerd[1591]: time="2025-05-16T10:03:46.321215505Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 16 10:03:46.321295 containerd[1591]: time="2025-05-16T10:03:46.321254449Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 16 10:03:46.321295 containerd[1591]: time="2025-05-16T10:03:46.321267773Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
May 16 10:03:46.321351 containerd[1591]: time="2025-05-16T10:03:46.321317165Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
May 16 10:03:46.322758 containerd[1591]: time="2025-05-16T10:03:46.322733439Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
May 16 10:03:46.323134 containerd[1591]: time="2025-05-16T10:03:46.323101310Z" level=info msg="metadata content store policy set" policy=shared
May 16 10:03:46.331919 containerd[1591]: time="2025-05-16T10:03:46.331854092Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
May 16 10:03:46.331919 containerd[1591]: time="2025-05-16T10:03:46.331891587Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
May 16 10:03:46.331919 containerd[1591]: time="2025-05-16T10:03:46.331905006Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
May 16 10:03:46.331919 containerd[1591]: time="2025-05-16T10:03:46.331915601Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
May 16 10:03:46.332020 containerd[1591]: time="2025-05-16T10:03:46.331926510Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
May 16 10:03:46.332020 containerd[1591]: time="2025-05-16T10:03:46.331936496Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
May 16 10:03:46.332020 containerd[1591]: time="2025-05-16T10:03:46.331947164Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
May 16 10:03:46.332020 containerd[1591]: time="2025-05-16T10:03:46.331957223Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
May 16 10:03:46.332020 containerd[1591]: time="2025-05-16T10:03:46.331967261Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
May 16 10:03:46.332020 containerd[1591]: time="2025-05-16T10:03:46.331976523Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
May 16 10:03:46.332020 containerd[1591]: time="2025-05-16T10:03:46.331984219Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
May 16 10:03:46.332020 containerd[1591]: time="2025-05-16T10:03:46.331998572Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
May 16 10:03:46.332175 containerd[1591]: time="2025-05-16T10:03:46.332107133Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
May 16 10:03:46.332175 containerd[1591]: time="2025-05-16T10:03:46.332123251Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
May 16 10:03:46.332175 containerd[1591]: time="2025-05-16T10:03:46.332135694Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
May 16 10:03:46.332175 containerd[1591]: time="2025-05-16T10:03:46.332146025Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
May 16 10:03:46.332175 containerd[1591]: time="2025-05-16T10:03:46.332171572Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
May 16 10:03:46.332265 containerd[1591]: time="2025-05-16T10:03:46.332181096Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
May 16 10:03:46.332265 containerd[1591]: time="2025-05-16T10:03:46.332195628Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
May 16 10:03:46.332265 containerd[1591]: time="2025-05-16T10:03:46.332208532Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
May 16 10:03:46.332265 containerd[1591]: time="2025-05-16T10:03:46.332218780Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
May 16 10:03:46.332265 containerd[1591]: time="2025-05-16T10:03:46.332229396Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
May 16 10:03:46.332265 containerd[1591]: time="2025-05-16T10:03:46.332238364Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
May 16 10:03:46.332459 containerd[1591]: time="2025-05-16T10:03:46.332295746Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
May 16 10:03:46.332459 containerd[1591]: time="2025-05-16T10:03:46.332306761Z" level=info msg="Start snapshots syncer"
May 16 10:03:46.332459 containerd[1591]: time="2025-05-16T10:03:46.332327078Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
May 16 10:03:46.332717 containerd[1591]: time="2025-05-16T10:03:46.332620000Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
May 16 10:03:46.332822 containerd[1591]: time="2025-05-16T10:03:46.332727983Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
May 16 10:03:46.333554 containerd[1591]: time="2025-05-16T10:03:46.333522682Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
May 16 10:03:46.333687 containerd[1591]: time="2025-05-16T10:03:46.333659897Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
May 16 10:03:46.333687 containerd[1591]: time="2025-05-16T10:03:46.333684824Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
May 16 10:03:46.333734 containerd[1591]: time="2025-05-16T10:03:46.333694684Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
May 16 10:03:46.333734 containerd[1591]: time="2025-05-16T10:03:46.333704921Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
May 16 10:03:46.333734 containerd[1591]: time="2025-05-16T10:03:46.333715254Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
May 16 10:03:46.333734 containerd[1591]: time="2025-05-16T10:03:46.333725292Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
May 16 10:03:46.333807 containerd[1591]: time="2025-05-16T10:03:46.333735098Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
May 16 10:03:46.333807 containerd[1591]: time="2025-05-16T10:03:46.333756183Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
May 16 10:03:46.333807 containerd[1591]: time="2025-05-16T10:03:46.333766830Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
May 16 10:03:46.333807 containerd[1591]: time="2025-05-16T10:03:46.333777183Z" level=info msg="loading plugin"
id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 16 10:03:46.334408 containerd[1591]: time="2025-05-16T10:03:46.334380287Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 16 10:03:46.334408 containerd[1591]: time="2025-05-16T10:03:46.334399051Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 16 10:03:46.334458 containerd[1591]: time="2025-05-16T10:03:46.334408060Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 16 10:03:46.334458 containerd[1591]: time="2025-05-16T10:03:46.334416996Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 16 10:03:46.334458 containerd[1591]: time="2025-05-16T10:03:46.334424472Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 16 10:03:46.334458 containerd[1591]: time="2025-05-16T10:03:46.334434426Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 16 10:03:46.334458 containerd[1591]: time="2025-05-16T10:03:46.334443330Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 16 10:03:46.334566 containerd[1591]: time="2025-05-16T10:03:46.334460350Z" level=info msg="runtime interface created" May 16 10:03:46.334566 containerd[1591]: time="2025-05-16T10:03:46.334465496Z" level=info msg="created NRI interface" May 16 10:03:46.334566 containerd[1591]: time="2025-05-16T10:03:46.334472804Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 16 10:03:46.334566 containerd[1591]: time="2025-05-16T10:03:46.334482799Z" level=info msg="Connect containerd service" May 16 
10:03:46.334566 containerd[1591]: time="2025-05-16T10:03:46.334504923Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 16 10:03:46.335216 containerd[1591]: time="2025-05-16T10:03:46.335184909Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 16 10:03:46.433553 containerd[1591]: time="2025-05-16T10:03:46.430577942Z" level=info msg="Start subscribing containerd event" May 16 10:03:46.433553 containerd[1591]: time="2025-05-16T10:03:46.430642802Z" level=info msg="Start recovering state" May 16 10:03:46.433553 containerd[1591]: time="2025-05-16T10:03:46.430765421Z" level=info msg="Start event monitor" May 16 10:03:46.433553 containerd[1591]: time="2025-05-16T10:03:46.430783943Z" level=info msg="Start cni network conf syncer for default" May 16 10:03:46.433553 containerd[1591]: time="2025-05-16T10:03:46.430797005Z" level=info msg="Start streaming server" May 16 10:03:46.433553 containerd[1591]: time="2025-05-16T10:03:46.430807947Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 16 10:03:46.433553 containerd[1591]: time="2025-05-16T10:03:46.430818142Z" level=info msg="runtime interface starting up..." May 16 10:03:46.433553 containerd[1591]: time="2025-05-16T10:03:46.430825724Z" level=info msg="starting plugins..." May 16 10:03:46.433553 containerd[1591]: time="2025-05-16T10:03:46.430843279Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 16 10:03:46.433553 containerd[1591]: time="2025-05-16T10:03:46.431268124Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 16 10:03:46.433553 containerd[1591]: time="2025-05-16T10:03:46.431396194Z" level=info msg=serving... 
address=/run/containerd/containerd.sock May 16 10:03:46.440918 containerd[1591]: time="2025-05-16T10:03:46.438895558Z" level=info msg="containerd successfully booted in 0.132658s" May 16 10:03:46.439136 systemd[1]: Started containerd.service - containerd container runtime. May 16 10:03:46.451355 sshd_keygen[1592]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 16 10:03:46.487794 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 16 10:03:46.492007 systemd[1]: Starting issuegen.service - Generate /run/issue... May 16 10:03:46.525727 systemd[1]: issuegen.service: Deactivated successfully. May 16 10:03:46.526122 systemd[1]: Finished issuegen.service - Generate /run/issue. May 16 10:03:46.530332 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 16 10:03:46.580435 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 16 10:03:46.584754 systemd[1]: Started getty@tty1.service - Getty on tty1. May 16 10:03:46.595774 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 16 10:03:46.597755 systemd[1]: Reached target getty.target - Login Prompts. May 16 10:03:46.740714 tar[1588]: linux-amd64/README.md May 16 10:03:46.770441 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 16 10:03:47.450008 systemd-networkd[1496]: eth0: Gained IPv6LL May 16 10:03:47.453188 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 16 10:03:47.455295 systemd[1]: Reached target network-online.target - Network is Online. May 16 10:03:47.458163 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 16 10:03:47.460871 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 10:03:47.475918 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 16 10:03:47.502830 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
May 16 10:03:47.505086 systemd[1]: coreos-metadata.service: Deactivated successfully.
May 16 10:03:47.505372 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
May 16 10:03:47.508913 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 16 10:03:48.177960 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 10:03:48.179721 systemd[1]: Reached target multi-user.target - Multi-User System.
May 16 10:03:48.181443 systemd[1]: Startup finished in 3.235s (kernel) + 7.898s (initrd) + 4.695s (userspace) = 15.829s.
May 16 10:03:48.209087 (kubelet)[1696]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 16 10:03:48.655680 kubelet[1696]: E0516 10:03:48.655531 1696 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 16 10:03:48.659452 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 16 10:03:48.659676 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 16 10:03:48.660037 systemd[1]: kubelet.service: Consumed 962ms CPU time, 252.9M memory peak.
May 16 10:03:49.271992 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 16 10:03:49.273503 systemd[1]: Started sshd@0-10.0.0.79:22-10.0.0.1:59088.service - OpenSSH per-connection server daemon (10.0.0.1:59088).
May 16 10:03:49.333088 sshd[1709]: Accepted publickey for core from 10.0.0.1 port 59088 ssh2: RSA SHA256:TkuFkvH6sCJ3kuKrabiD7Z8ORwd+XoH0QjfS0JDvRdI
May 16 10:03:49.334936 sshd-session[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 10:03:49.341386 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 16 10:03:49.342628 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 16 10:03:49.350390 systemd-logind[1576]: New session 1 of user core.
May 16 10:03:49.367635 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 16 10:03:49.370696 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 16 10:03:49.391975 (systemd)[1713]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 16 10:03:49.394273 systemd-logind[1576]: New session c1 of user core.
May 16 10:03:49.544344 systemd[1713]: Queued start job for default target default.target.
May 16 10:03:49.561939 systemd[1713]: Created slice app.slice - User Application Slice.
May 16 10:03:49.561968 systemd[1713]: Reached target paths.target - Paths.
May 16 10:03:49.562010 systemd[1713]: Reached target timers.target - Timers.
May 16 10:03:49.563633 systemd[1713]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 16 10:03:49.574599 systemd[1713]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 16 10:03:49.574764 systemd[1713]: Reached target sockets.target - Sockets.
May 16 10:03:49.574818 systemd[1713]: Reached target basic.target - Basic System.
May 16 10:03:49.574874 systemd[1713]: Reached target default.target - Main User Target.
May 16 10:03:49.574913 systemd[1713]: Startup finished in 174ms.
May 16 10:03:49.575171 systemd[1]: Started user@500.service - User Manager for UID 500.
May 16 10:03:49.584668 systemd[1]: Started session-1.scope - Session 1 of User core.
May 16 10:03:49.655491 systemd[1]: Started sshd@1-10.0.0.79:22-10.0.0.1:59096.service - OpenSSH per-connection server daemon (10.0.0.1:59096).
May 16 10:03:49.718724 sshd[1724]: Accepted publickey for core from 10.0.0.1 port 59096 ssh2: RSA SHA256:TkuFkvH6sCJ3kuKrabiD7Z8ORwd+XoH0QjfS0JDvRdI
May 16 10:03:49.720185 sshd-session[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 10:03:49.724507 systemd-logind[1576]: New session 2 of user core.
May 16 10:03:49.741665 systemd[1]: Started session-2.scope - Session 2 of User core.
May 16 10:03:49.795061 sshd[1726]: Connection closed by 10.0.0.1 port 59096
May 16 10:03:49.795421 sshd-session[1724]: pam_unix(sshd:session): session closed for user core
May 16 10:03:49.807402 systemd[1]: sshd@1-10.0.0.79:22-10.0.0.1:59096.service: Deactivated successfully.
May 16 10:03:49.809124 systemd[1]: session-2.scope: Deactivated successfully.
May 16 10:03:49.809959 systemd-logind[1576]: Session 2 logged out. Waiting for processes to exit.
May 16 10:03:49.812885 systemd[1]: Started sshd@2-10.0.0.79:22-10.0.0.1:59108.service - OpenSSH per-connection server daemon (10.0.0.1:59108).
May 16 10:03:49.813653 systemd-logind[1576]: Removed session 2.
May 16 10:03:49.869804 sshd[1732]: Accepted publickey for core from 10.0.0.1 port 59108 ssh2: RSA SHA256:TkuFkvH6sCJ3kuKrabiD7Z8ORwd+XoH0QjfS0JDvRdI
May 16 10:03:49.871166 sshd-session[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 10:03:49.875173 systemd-logind[1576]: New session 3 of user core.
May 16 10:03:49.889667 systemd[1]: Started session-3.scope - Session 3 of User core.
May 16 10:03:49.939935 sshd[1734]: Connection closed by 10.0.0.1 port 59108
May 16 10:03:49.940216 sshd-session[1732]: pam_unix(sshd:session): session closed for user core
May 16 10:03:49.959046 systemd[1]: sshd@2-10.0.0.79:22-10.0.0.1:59108.service: Deactivated successfully.
May 16 10:03:49.960669 systemd[1]: session-3.scope: Deactivated successfully.
May 16 10:03:49.961308 systemd-logind[1576]: Session 3 logged out. Waiting for processes to exit.
May 16 10:03:49.963787 systemd[1]: Started sshd@3-10.0.0.79:22-10.0.0.1:59116.service - OpenSSH per-connection server daemon (10.0.0.1:59116).
May 16 10:03:49.964515 systemd-logind[1576]: Removed session 3.
May 16 10:03:50.020284 sshd[1740]: Accepted publickey for core from 10.0.0.1 port 59116 ssh2: RSA SHA256:TkuFkvH6sCJ3kuKrabiD7Z8ORwd+XoH0QjfS0JDvRdI
May 16 10:03:50.021743 sshd-session[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 10:03:50.025972 systemd-logind[1576]: New session 4 of user core.
May 16 10:03:50.039684 systemd[1]: Started session-4.scope - Session 4 of User core.
May 16 10:03:50.093192 sshd[1742]: Connection closed by 10.0.0.1 port 59116
May 16 10:03:50.093446 sshd-session[1740]: pam_unix(sshd:session): session closed for user core
May 16 10:03:50.101947 systemd[1]: sshd@3-10.0.0.79:22-10.0.0.1:59116.service: Deactivated successfully.
May 16 10:03:50.103514 systemd[1]: session-4.scope: Deactivated successfully.
May 16 10:03:50.104176 systemd-logind[1576]: Session 4 logged out. Waiting for processes to exit.
May 16 10:03:50.106626 systemd[1]: Started sshd@4-10.0.0.79:22-10.0.0.1:59122.service - OpenSSH per-connection server daemon (10.0.0.1:59122).
May 16 10:03:50.107144 systemd-logind[1576]: Removed session 4.
May 16 10:03:50.167800 sshd[1748]: Accepted publickey for core from 10.0.0.1 port 59122 ssh2: RSA SHA256:TkuFkvH6sCJ3kuKrabiD7Z8ORwd+XoH0QjfS0JDvRdI
May 16 10:03:50.169186 sshd-session[1748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 10:03:50.173359 systemd-logind[1576]: New session 5 of user core.
May 16 10:03:50.181723 systemd[1]: Started session-5.scope - Session 5 of User core.
May 16 10:03:50.240740 sudo[1751]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 16 10:03:50.241042 sudo[1751]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 16 10:03:50.256470 sudo[1751]: pam_unix(sudo:session): session closed for user root
May 16 10:03:50.257950 sshd[1750]: Connection closed by 10.0.0.1 port 59122
May 16 10:03:50.258290 sshd-session[1748]: pam_unix(sshd:session): session closed for user core
May 16 10:03:50.276210 systemd[1]: sshd@4-10.0.0.79:22-10.0.0.1:59122.service: Deactivated successfully.
May 16 10:03:50.277891 systemd[1]: session-5.scope: Deactivated successfully.
May 16 10:03:50.278708 systemd-logind[1576]: Session 5 logged out. Waiting for processes to exit.
May 16 10:03:50.281841 systemd[1]: Started sshd@5-10.0.0.79:22-10.0.0.1:59138.service - OpenSSH per-connection server daemon (10.0.0.1:59138).
May 16 10:03:50.282431 systemd-logind[1576]: Removed session 5.
May 16 10:03:50.331822 sshd[1757]: Accepted publickey for core from 10.0.0.1 port 59138 ssh2: RSA SHA256:TkuFkvH6sCJ3kuKrabiD7Z8ORwd+XoH0QjfS0JDvRdI
May 16 10:03:50.333318 sshd-session[1757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 10:03:50.337673 systemd-logind[1576]: New session 6 of user core.
May 16 10:03:50.347667 systemd[1]: Started session-6.scope - Session 6 of User core.
May 16 10:03:50.402890 sudo[1761]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 16 10:03:50.403184 sudo[1761]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 16 10:03:50.410475 sudo[1761]: pam_unix(sudo:session): session closed for user root
May 16 10:03:50.416775 sudo[1760]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 16 10:03:50.417093 sudo[1760]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 16 10:03:50.427173 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 16 10:03:50.473969 augenrules[1783]: No rules
May 16 10:03:50.475597 systemd[1]: audit-rules.service: Deactivated successfully.
May 16 10:03:50.475895 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 16 10:03:50.477074 sudo[1760]: pam_unix(sudo:session): session closed for user root
May 16 10:03:50.478486 sshd[1759]: Connection closed by 10.0.0.1 port 59138
May 16 10:03:50.478876 sshd-session[1757]: pam_unix(sshd:session): session closed for user core
May 16 10:03:50.491294 systemd[1]: sshd@5-10.0.0.79:22-10.0.0.1:59138.service: Deactivated successfully.
May 16 10:03:50.493090 systemd[1]: session-6.scope: Deactivated successfully.
May 16 10:03:50.493915 systemd-logind[1576]: Session 6 logged out. Waiting for processes to exit.
May 16 10:03:50.496387 systemd[1]: Started sshd@6-10.0.0.79:22-10.0.0.1:59144.service - OpenSSH per-connection server daemon (10.0.0.1:59144).
May 16 10:03:50.497142 systemd-logind[1576]: Removed session 6.
May 16 10:03:50.549735 sshd[1792]: Accepted publickey for core from 10.0.0.1 port 59144 ssh2: RSA SHA256:TkuFkvH6sCJ3kuKrabiD7Z8ORwd+XoH0QjfS0JDvRdI
May 16 10:03:50.551121 sshd-session[1792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 10:03:50.555397 systemd-logind[1576]: New session 7 of user core.
May 16 10:03:50.564667 systemd[1]: Started session-7.scope - Session 7 of User core.
May 16 10:03:50.618727 sudo[1795]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 16 10:03:50.619086 sudo[1795]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 16 10:03:50.923947 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 16 10:03:50.941994 (dockerd)[1815]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 16 10:03:51.166805 dockerd[1815]: time="2025-05-16T10:03:51.166725846Z" level=info msg="Starting up"
May 16 10:03:51.168472 dockerd[1815]: time="2025-05-16T10:03:51.168424822Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
May 16 10:03:51.983099 dockerd[1815]: time="2025-05-16T10:03:51.982323729Z" level=info msg="Loading containers: start."
May 16 10:03:52.017005 kernel: Initializing XFRM netlink socket
May 16 10:03:52.591850 systemd-networkd[1496]: docker0: Link UP
May 16 10:03:52.615281 dockerd[1815]: time="2025-05-16T10:03:52.615192412Z" level=info msg="Loading containers: done."
May 16 10:03:52.643379 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1683372790-merged.mount: Deactivated successfully.
May 16 10:03:52.654181 dockerd[1815]: time="2025-05-16T10:03:52.654084681Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 16 10:03:52.654363 dockerd[1815]: time="2025-05-16T10:03:52.654208666Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
May 16 10:03:52.654418 dockerd[1815]: time="2025-05-16T10:03:52.654383568Z" level=info msg="Initializing buildkit"
May 16 10:03:52.735719 dockerd[1815]: time="2025-05-16T10:03:52.733054153Z" level=info msg="Completed buildkit initialization"
May 16 10:03:52.754278 dockerd[1815]: time="2025-05-16T10:03:52.754198169Z" level=info msg="Daemon has completed initialization"
May 16 10:03:52.754444 dockerd[1815]: time="2025-05-16T10:03:52.754412834Z" level=info msg="API listen on /run/docker.sock"
May 16 10:03:52.754652 systemd[1]: Started docker.service - Docker Application Container Engine.
May 16 10:03:53.705547 containerd[1591]: time="2025-05-16T10:03:53.705486553Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\""
May 16 10:03:54.300696 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount783437883.mount: Deactivated successfully.
May 16 10:03:56.222293 containerd[1591]: time="2025-05-16T10:03:56.221750384Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 10:03:56.225134 containerd[1591]: time="2025-05-16T10:03:56.225018941Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.5: active requests=0, bytes read=28797811"
May 16 10:03:56.230116 containerd[1591]: time="2025-05-16T10:03:56.230047574Z" level=info msg="ImageCreate event name:\"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 10:03:56.234840 containerd[1591]: time="2025-05-16T10:03:56.234696881Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 10:03:56.239195 containerd[1591]: time="2025-05-16T10:03:56.239045265Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.5\" with image id \"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\", size \"28794611\" in 2.533487704s"
May 16 10:03:56.239195 containerd[1591]: time="2025-05-16T10:03:56.239140632Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\" returns image reference \"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\""
May 16 10:03:56.240963 containerd[1591]: time="2025-05-16T10:03:56.240340133Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\""
May 16 10:03:58.418886 containerd[1591]: time="2025-05-16T10:03:58.418808253Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 10:03:58.420523 containerd[1591]: time="2025-05-16T10:03:58.420453224Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.5: active requests=0, bytes read=24782523"
May 16 10:03:58.422087 containerd[1591]: time="2025-05-16T10:03:58.422010925Z" level=info msg="ImageCreate event name:\"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 10:03:58.425582 containerd[1591]: time="2025-05-16T10:03:58.425547204Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 10:03:58.426711 containerd[1591]: time="2025-05-16T10:03:58.426660704Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.5\" with image id \"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\", size \"26384363\" in 2.186259481s"
May 16 10:03:58.426753 containerd[1591]: time="2025-05-16T10:03:58.426710958Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\" returns image reference \"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\""
May 16 10:03:58.427372 containerd[1591]: time="2025-05-16T10:03:58.427148939Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\""
May 16 10:03:58.767884 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 16 10:03:58.769396 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 16 10:03:58.964948 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 10:03:58.969429 (kubelet)[2094]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 16 10:03:59.106399 kubelet[2094]: E0516 10:03:59.106231 2094 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 16 10:03:59.113388 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 16 10:03:59.113619 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 16 10:03:59.114041 systemd[1]: kubelet.service: Consumed 209ms CPU time, 104.6M memory peak.
May 16 10:04:00.152196 containerd[1591]: time="2025-05-16T10:04:00.152120535Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 10:04:00.153059 containerd[1591]: time="2025-05-16T10:04:00.153025143Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.5: active requests=0, bytes read=19176063"
May 16 10:04:00.154199 containerd[1591]: time="2025-05-16T10:04:00.154113271Z" level=info msg="ImageCreate event name:\"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 10:04:00.156464 containerd[1591]: time="2025-05-16T10:04:00.156411752Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 10:04:00.157332 containerd[1591]: time="2025-05-16T10:04:00.157301451Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.5\" with image id \"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\", size \"20777921\" in 1.730122551s"
May 16 10:04:00.157392 containerd[1591]: time="2025-05-16T10:04:00.157333830Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\" returns image reference \"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\""
May 16 10:04:00.157899 containerd[1591]: time="2025-05-16T10:04:00.157864066Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\""
May 16 10:04:01.139193 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4167518625.mount: Deactivated successfully.
May 16 10:04:01.777964 containerd[1591]: time="2025-05-16T10:04:01.777901574Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 10:04:01.778937 containerd[1591]: time="2025-05-16T10:04:01.778904704Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.5: active requests=0, bytes read=30892872"
May 16 10:04:01.779939 containerd[1591]: time="2025-05-16T10:04:01.779900553Z" level=info msg="ImageCreate event name:\"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 10:04:01.781825 containerd[1591]: time="2025-05-16T10:04:01.781790921Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 10:04:01.782428 containerd[1591]: time="2025-05-16T10:04:01.782381546Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.5\" with image id \"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\", repo tag \"registry.k8s.io/kube-proxy:v1.32.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\", size \"30891891\" in 1.624486423s"
May 16 10:04:01.782428 containerd[1591]: time="2025-05-16T10:04:01.782416113Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\" returns image reference \"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\""
May 16 10:04:01.782916 containerd[1591]: time="2025-05-16T10:04:01.782881547Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
May 16 10:04:02.323435 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2164565966.mount: Deactivated successfully.
May 16 10:04:03.403938 containerd[1591]: time="2025-05-16T10:04:03.403875167Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 10:04:03.404790 containerd[1591]: time="2025-05-16T10:04:03.404743647Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
May 16 10:04:03.405976 containerd[1591]: time="2025-05-16T10:04:03.405942947Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 10:04:03.408353 containerd[1591]: time="2025-05-16T10:04:03.408319490Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 10:04:03.409143 containerd[1591]: time="2025-05-16T10:04:03.409111157Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.626191704s"
May 16 10:04:03.409143 containerd[1591]: time="2025-05-16T10:04:03.409141090Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
May 16 10:04:03.409668 containerd[1591]: time="2025-05-16T10:04:03.409633129Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 16 10:04:03.897446 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount330157667.mount: Deactivated successfully.
May 16 10:04:03.934069 containerd[1591]: time="2025-05-16T10:04:03.934020120Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 16 10:04:03.939400 containerd[1591]: time="2025-05-16T10:04:03.939353060Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
May 16 10:04:03.950239 containerd[1591]: time="2025-05-16T10:04:03.950165114Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 16 10:04:03.953350 containerd[1591]: time="2025-05-16T10:04:03.953308565Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 16 10:04:03.954033 containerd[1591]: time="2025-05-16T10:04:03.953924053Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 544.26095ms"
May 16 10:04:03.954033 containerd[1591]: time="2025-05-16T10:04:03.953958256Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
May 16 10:04:03.954468 containerd[1591]: time="2025-05-16T10:04:03.954418498Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
May 16 10:04:04.432381 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1475440589.mount: Deactivated successfully.
May 16 10:04:06.927940 containerd[1591]: time="2025-05-16T10:04:06.927855813Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 10:04:06.928595 containerd[1591]: time="2025-05-16T10:04:06.928553298Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360"
May 16 10:04:06.930071 containerd[1591]: time="2025-05-16T10:04:06.929878547Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 10:04:06.932610 containerd[1591]: time="2025-05-16T10:04:06.932572129Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 10:04:06.933632 containerd[1591]: time="2025-05-16T10:04:06.933604580Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size
\"57680541\" in 2.97915902s" May 16 10:04:06.933680 containerd[1591]: time="2025-05-16T10:04:06.933635309Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" May 16 10:04:08.949508 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 16 10:04:08.949691 systemd[1]: kubelet.service: Consumed 209ms CPU time, 104.6M memory peak. May 16 10:04:08.951852 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 10:04:08.976408 systemd[1]: Reload requested from client PID 2255 ('systemctl') (unit session-7.scope)... May 16 10:04:08.976425 systemd[1]: Reloading... May 16 10:04:09.063537 zram_generator::config[2301]: No configuration found. May 16 10:04:09.436125 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 10:04:09.558551 systemd[1]: Reloading finished in 581 ms. May 16 10:04:09.630389 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 16 10:04:09.630496 systemd[1]: kubelet.service: Failed with result 'signal'. May 16 10:04:09.630853 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 16 10:04:09.630901 systemd[1]: kubelet.service: Consumed 139ms CPU time, 91.8M memory peak. May 16 10:04:09.632940 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 10:04:09.802429 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 16 10:04:09.819004 (kubelet)[2346]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 16 10:04:09.865271 kubelet[2346]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 10:04:09.865271 kubelet[2346]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 16 10:04:09.865271 kubelet[2346]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 10:04:09.865740 kubelet[2346]: I0516 10:04:09.865338 2346 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 16 10:04:10.257674 kubelet[2346]: I0516 10:04:10.257514 2346 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 16 10:04:10.257674 kubelet[2346]: I0516 10:04:10.257574 2346 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 16 10:04:10.257928 kubelet[2346]: I0516 10:04:10.257907 2346 server.go:954] "Client rotation is on, will bootstrap in background" May 16 10:04:10.303236 kubelet[2346]: E0516 10:04:10.303173 2346 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.79:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError" May 16 10:04:10.304217 kubelet[2346]: I0516 10:04:10.304184 
2346 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 16 10:04:10.314408 kubelet[2346]: I0516 10:04:10.314357 2346 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 16 10:04:10.320980 kubelet[2346]: I0516 10:04:10.320930 2346 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 16 10:04:10.321317 kubelet[2346]: I0516 10:04:10.321271 2346 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 16 10:04:10.321497 kubelet[2346]: I0516 10:04:10.321306 2346 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CP
UManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 16 10:04:10.321721 kubelet[2346]: I0516 10:04:10.321500 2346 topology_manager.go:138] "Creating topology manager with none policy" May 16 10:04:10.321721 kubelet[2346]: I0516 10:04:10.321512 2346 container_manager_linux.go:304] "Creating device plugin manager" May 16 10:04:10.321721 kubelet[2346]: I0516 10:04:10.321695 2346 state_mem.go:36] "Initialized new in-memory state store" May 16 10:04:10.324307 kubelet[2346]: I0516 10:04:10.324277 2346 kubelet.go:446] "Attempting to sync node with API server" May 16 10:04:10.324307 kubelet[2346]: I0516 10:04:10.324299 2346 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 16 10:04:10.324389 kubelet[2346]: I0516 10:04:10.324329 2346 kubelet.go:352] "Adding apiserver pod source" May 16 10:04:10.324389 kubelet[2346]: I0516 10:04:10.324342 2346 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 16 10:04:10.333014 kubelet[2346]: W0516 10:04:10.332945 2346 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.79:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused May 16 10:04:10.333171 kubelet[2346]: E0516 10:04:10.333031 2346 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.79:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError" May 16 10:04:10.333171 kubelet[2346]: W0516 10:04:10.333058 2346 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused May 16 10:04:10.333171 kubelet[2346]: E0516 10:04:10.333156 2346 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError" May 16 10:04:10.333612 kubelet[2346]: I0516 10:04:10.333579 2346 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 16 10:04:10.334316 kubelet[2346]: I0516 10:04:10.334288 2346 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 16 10:04:10.334958 kubelet[2346]: W0516 10:04:10.334930 2346 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 16 10:04:10.337713 kubelet[2346]: I0516 10:04:10.337684 2346 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 16 10:04:10.337785 kubelet[2346]: I0516 10:04:10.337731 2346 server.go:1287] "Started kubelet" May 16 10:04:10.337901 kubelet[2346]: I0516 10:04:10.337860 2346 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 16 10:04:10.338289 kubelet[2346]: I0516 10:04:10.338230 2346 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 16 10:04:10.339184 kubelet[2346]: I0516 10:04:10.339159 2346 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 16 10:04:10.343655 kubelet[2346]: I0516 10:04:10.342691 2346 server.go:490] "Adding debug handlers to kubelet server" May 16 10:04:10.343655 kubelet[2346]: I0516 10:04:10.343362 2346 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 16 10:04:10.343823 kubelet[2346]: E0516 10:04:10.342319 2346 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.79:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.79:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183ff9cf918072de default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-16 10:04:10.337702622 +0000 UTC m=+0.514688555,LastTimestamp:2025-05-16 10:04:10.337702622 +0000 UTC m=+0.514688555,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 16 10:04:10.344174 kubelet[2346]: I0516 10:04:10.343985 2346 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 16 10:04:10.344785 kubelet[2346]: E0516 10:04:10.344768 2346 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 10:04:10.344876 kubelet[2346]: I0516 10:04:10.344864 2346 volume_manager.go:297] "Starting Kubelet Volume Manager" May 16 10:04:10.345054 kubelet[2346]: E0516 10:04:10.345022 2346 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.79:6443: connect: connection refused" interval="200ms" May 16 10:04:10.345111 kubelet[2346]: I0516 10:04:10.345058 2346 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 16 10:04:10.345253 kubelet[2346]: I0516 10:04:10.345238 2346 reconciler.go:26] "Reconciler: start to sync state" May 16 10:04:10.345411 kubelet[2346]: W0516 10:04:10.345374 2346 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused May 16 10:04:10.345455 kubelet[2346]: E0516 10:04:10.345426 2346 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError" May 16 10:04:10.345820 kubelet[2346]: E0516 10:04:10.345796 2346 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 16 10:04:10.346109 kubelet[2346]: I0516 10:04:10.346072 2346 factory.go:221] Registration of the systemd container factory successfully May 16 10:04:10.346350 kubelet[2346]: I0516 10:04:10.346332 2346 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 16 10:04:10.347821 kubelet[2346]: I0516 10:04:10.347806 2346 factory.go:221] Registration of the containerd container factory successfully May 16 10:04:10.362861 kubelet[2346]: I0516 10:04:10.362776 2346 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 16 10:04:10.364698 kubelet[2346]: I0516 10:04:10.364645 2346 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 16 10:04:10.364698 kubelet[2346]: I0516 10:04:10.364672 2346 status_manager.go:227] "Starting to sync pod status with apiserver" May 16 10:04:10.364698 kubelet[2346]: I0516 10:04:10.364695 2346 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 16 10:04:10.364698 kubelet[2346]: I0516 10:04:10.364704 2346 kubelet.go:2388] "Starting kubelet main sync loop" May 16 10:04:10.364890 kubelet[2346]: E0516 10:04:10.364759 2346 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 16 10:04:10.368116 kubelet[2346]: W0516 10:04:10.368058 2346 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused May 16 10:04:10.368321 kubelet[2346]: E0516 10:04:10.368153 2346 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError" May 16 10:04:10.371548 kubelet[2346]: I0516 10:04:10.371502 2346 cpu_manager.go:221] "Starting CPU manager" policy="none" May 16 10:04:10.371548 kubelet[2346]: I0516 10:04:10.371551 2346 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 16 10:04:10.371668 kubelet[2346]: I0516 10:04:10.371570 2346 state_mem.go:36] "Initialized new in-memory state store" May 16 10:04:10.445938 kubelet[2346]: E0516 10:04:10.445877 2346 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 10:04:10.465293 kubelet[2346]: E0516 10:04:10.465217 2346 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 16 10:04:10.546017 kubelet[2346]: E0516 10:04:10.545973 2346 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.79:6443: connect: connection refused" interval="400ms" May 16 10:04:10.546017 kubelet[2346]: E0516 10:04:10.545985 2346 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 10:04:10.646447 kubelet[2346]: E0516 10:04:10.646388 2346 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 10:04:10.665743 kubelet[2346]: E0516 10:04:10.665681 2346 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 16 10:04:10.747397 kubelet[2346]: E0516 10:04:10.747338 2346 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 10:04:10.847716 kubelet[2346]: E0516 10:04:10.847585 2346 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 10:04:10.947487 kubelet[2346]: E0516 10:04:10.947445 2346 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.79:6443: connect: connection refused" interval="800ms" May 16 10:04:10.948413 kubelet[2346]: E0516 10:04:10.948375 2346 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 10:04:11.049267 kubelet[2346]: E0516 10:04:11.049193 2346 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 10:04:11.066609 kubelet[2346]: E0516 10:04:11.066503 2346 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 16 10:04:11.150287 kubelet[2346]: E0516 10:04:11.150134 2346 kubelet_node_status.go:467] "Error 
getting the current node from lister" err="node \"localhost\" not found" May 16 10:04:11.250808 kubelet[2346]: E0516 10:04:11.250742 2346 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 10:04:11.257552 kubelet[2346]: W0516 10:04:11.257491 2346 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused May 16 10:04:11.257619 kubelet[2346]: E0516 10:04:11.257591 2346 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError" May 16 10:04:11.351608 kubelet[2346]: E0516 10:04:11.351563 2346 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 10:04:11.452113 kubelet[2346]: E0516 10:04:11.451985 2346 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 10:04:11.511804 kubelet[2346]: W0516 10:04:11.511764 2346 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused May 16 10:04:11.511952 kubelet[2346]: E0516 10:04:11.511810 2346 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError" May 16 10:04:11.552443 
kubelet[2346]: E0516 10:04:11.552381 2346 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 10:04:11.652941 kubelet[2346]: E0516 10:04:11.652884 2346 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 10:04:11.714725 kubelet[2346]: W0516 10:04:11.714597 2346 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.79:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused May 16 10:04:11.714725 kubelet[2346]: E0516 10:04:11.714649 2346 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.79:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError" May 16 10:04:11.748774 kubelet[2346]: E0516 10:04:11.748730 2346 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.79:6443: connect: connection refused" interval="1.6s" May 16 10:04:11.753822 kubelet[2346]: E0516 10:04:11.753798 2346 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 10:04:11.821752 kubelet[2346]: W0516 10:04:11.821710 2346 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused May 16 10:04:11.821876 kubelet[2346]: E0516 10:04:11.821753 2346 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: 
Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError" May 16 10:04:11.854811 kubelet[2346]: E0516 10:04:11.854741 2346 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 10:04:11.867024 kubelet[2346]: E0516 10:04:11.866975 2346 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 16 10:04:11.955631 kubelet[2346]: E0516 10:04:11.955564 2346 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 10:04:12.055825 kubelet[2346]: I0516 10:04:12.055745 2346 policy_none.go:49] "None policy: Start" May 16 10:04:12.055825 kubelet[2346]: I0516 10:04:12.055792 2346 memory_manager.go:186] "Starting memorymanager" policy="None" May 16 10:04:12.055825 kubelet[2346]: I0516 10:04:12.055808 2346 state_mem.go:35] "Initializing new in-memory state store" May 16 10:04:12.056059 kubelet[2346]: E0516 10:04:12.056022 2346 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 10:04:12.144871 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 16 10:04:12.156703 kubelet[2346]: E0516 10:04:12.156671 2346 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 10:04:12.163248 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 16 10:04:12.166597 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
May 16 10:04:12.174412 kubelet[2346]: I0516 10:04:12.174369 2346 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 16 10:04:12.174643 kubelet[2346]: I0516 10:04:12.174611 2346 eviction_manager.go:189] "Eviction manager: starting control loop" May 16 10:04:12.174694 kubelet[2346]: I0516 10:04:12.174627 2346 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 16 10:04:12.175464 kubelet[2346]: I0516 10:04:12.174948 2346 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 16 10:04:12.176075 kubelet[2346]: E0516 10:04:12.176051 2346 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 16 10:04:12.176133 kubelet[2346]: E0516 10:04:12.176086 2346 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 16 10:04:12.275978 kubelet[2346]: I0516 10:04:12.275939 2346 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 16 10:04:12.276346 kubelet[2346]: E0516 10:04:12.276325 2346 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.79:6443/api/v1/nodes\": dial tcp 10.0.0.79:6443: connect: connection refused" node="localhost" May 16 10:04:12.381286 kubelet[2346]: E0516 10:04:12.381168 2346 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.79:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError" May 16 10:04:12.478299 kubelet[2346]: I0516 10:04:12.478234 2346 kubelet_node_status.go:76] "Attempting to register node" 
node="localhost" May 16 10:04:12.478702 kubelet[2346]: E0516 10:04:12.478656 2346 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.79:6443/api/v1/nodes\": dial tcp 10.0.0.79:6443: connect: connection refused" node="localhost" May 16 10:04:12.880991 kubelet[2346]: I0516 10:04:12.880938 2346 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 16 10:04:12.881477 kubelet[2346]: E0516 10:04:12.881416 2346 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.79:6443/api/v1/nodes\": dial tcp 10.0.0.79:6443: connect: connection refused" node="localhost" May 16 10:04:13.349811 kubelet[2346]: E0516 10:04:13.349751 2346 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.79:6443: connect: connection refused" interval="3.2s" May 16 10:04:13.388600 kubelet[2346]: W0516 10:04:13.388563 2346 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused May 16 10:04:13.388600 kubelet[2346]: E0516 10:04:13.388605 2346 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError" May 16 10:04:13.476459 systemd[1]: Created slice kubepods-burstable-podc2789426f2d54711763348e0fc412620.slice - libcontainer container kubepods-burstable-podc2789426f2d54711763348e0fc412620.slice. 
May 16 10:04:13.726148 kubelet[2346]: I0516 10:04:13.563969 2346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 10:04:13.726148 kubelet[2346]: I0516 10:04:13.564049 2346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 10:04:13.726148 kubelet[2346]: I0516 10:04:13.564076 2346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 10:04:13.726148 kubelet[2346]: I0516 10:04:13.564097 2346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c2789426f2d54711763348e0fc412620-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c2789426f2d54711763348e0fc412620\") " pod="kube-system/kube-apiserver-localhost" May 16 10:04:13.726148 kubelet[2346]: I0516 10:04:13.564116 2346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c2789426f2d54711763348e0fc412620-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c2789426f2d54711763348e0fc412620\") " pod="kube-system/kube-apiserver-localhost" May 16 
10:04:13.726348 kubelet[2346]: I0516 10:04:13.564136 2346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 10:04:13.726348 kubelet[2346]: I0516 10:04:13.564156 2346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/447e79232307504a6964f3be51e3d64d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"447e79232307504a6964f3be51e3d64d\") " pod="kube-system/kube-scheduler-localhost" May 16 10:04:13.726348 kubelet[2346]: I0516 10:04:13.564177 2346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c2789426f2d54711763348e0fc412620-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c2789426f2d54711763348e0fc412620\") " pod="kube-system/kube-apiserver-localhost" May 16 10:04:13.726348 kubelet[2346]: I0516 10:04:13.564201 2346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 10:04:13.729907 kubelet[2346]: I0516 10:04:13.729874 2346 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 16 10:04:13.730228 kubelet[2346]: E0516 10:04:13.730192 2346 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.79:6443/api/v1/nodes\": dial tcp 10.0.0.79:6443: connect: connection refused" node="localhost" May 
16 10:04:13.740227 kubelet[2346]: E0516 10:04:13.740188 2346 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 10:04:13.740647 kubelet[2346]: E0516 10:04:13.740625 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:04:13.741336 containerd[1591]: time="2025-05-16T10:04:13.741287819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c2789426f2d54711763348e0fc412620,Namespace:kube-system,Attempt:0,}" May 16 10:04:13.743766 systemd[1]: Created slice kubepods-burstable-pod7c751acbcd1525da2f1a64e395f86bdd.slice - libcontainer container kubepods-burstable-pod7c751acbcd1525da2f1a64e395f86bdd.slice. May 16 10:04:13.755163 kubelet[2346]: E0516 10:04:13.755118 2346 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 10:04:13.755511 kubelet[2346]: E0516 10:04:13.755466 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:04:13.756003 containerd[1591]: time="2025-05-16T10:04:13.755969987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7c751acbcd1525da2f1a64e395f86bdd,Namespace:kube-system,Attempt:0,}" May 16 10:04:13.758187 systemd[1]: Created slice kubepods-burstable-pod447e79232307504a6964f3be51e3d64d.slice - libcontainer container kubepods-burstable-pod447e79232307504a6964f3be51e3d64d.slice. 
May 16 10:04:13.760178 kubelet[2346]: E0516 10:04:13.760144 2346 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 10:04:13.760458 kubelet[2346]: E0516 10:04:13.760426 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:04:13.760838 containerd[1591]: time="2025-05-16T10:04:13.760801639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:447e79232307504a6964f3be51e3d64d,Namespace:kube-system,Attempt:0,}" May 16 10:04:14.026474 kubelet[2346]: W0516 10:04:14.026395 2346 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused May 16 10:04:14.026474 kubelet[2346]: E0516 10:04:14.026473 2346 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError" May 16 10:04:14.137705 kubelet[2346]: W0516 10:04:14.137629 2346 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused May 16 10:04:14.137705 kubelet[2346]: E0516 10:04:14.137695 2346 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": 
dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError" May 16 10:04:14.496020 kubelet[2346]: W0516 10:04:14.495873 2346 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.79:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused May 16 10:04:14.496020 kubelet[2346]: E0516 10:04:14.495949 2346 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.79:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError" May 16 10:04:15.331946 kubelet[2346]: I0516 10:04:15.331884 2346 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 16 10:04:15.332311 kubelet[2346]: E0516 10:04:15.332264 2346 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.79:6443/api/v1/nodes\": dial tcp 10.0.0.79:6443: connect: connection refused" node="localhost" May 16 10:04:15.469804 containerd[1591]: time="2025-05-16T10:04:15.469755884Z" level=info msg="connecting to shim c55f0ebc08230da182b9cea752d6acf399ffae13b07db0b6a7c1d0d6a61ce920" address="unix:///run/containerd/s/d5f869faaf70cab97329d37962d9780518ae0721eef1a5aac98a65b9e4c30942" namespace=k8s.io protocol=ttrpc version=3 May 16 10:04:15.513709 containerd[1591]: time="2025-05-16T10:04:15.513661170Z" level=info msg="connecting to shim 652974465f6d98a99b1279c7929b050b6d3a34eca45bd4c6f967049442f80ddc" address="unix:///run/containerd/s/9e5cd50477e5da1014e50fea3a8c60dc08cbe65192a0b7f6a461cf2bcb9d89c9" namespace=k8s.io protocol=ttrpc version=3 May 16 10:04:15.517195 systemd[1]: Started cri-containerd-c55f0ebc08230da182b9cea752d6acf399ffae13b07db0b6a7c1d0d6a61ce920.scope - libcontainer container 
c55f0ebc08230da182b9cea752d6acf399ffae13b07db0b6a7c1d0d6a61ce920. May 16 10:04:15.522818 containerd[1591]: time="2025-05-16T10:04:15.522754099Z" level=info msg="connecting to shim 9638567001c193fda85f01e7466e086927fd3011f9716204508f068ca425615c" address="unix:///run/containerd/s/5431d4ec7dd3fe83324ea10cc9a9672c4f57ee3710616c75562c2986a8891730" namespace=k8s.io protocol=ttrpc version=3 May 16 10:04:15.644145 systemd[1]: Started cri-containerd-652974465f6d98a99b1279c7929b050b6d3a34eca45bd4c6f967049442f80ddc.scope - libcontainer container 652974465f6d98a99b1279c7929b050b6d3a34eca45bd4c6f967049442f80ddc. May 16 10:04:15.670687 systemd[1]: Started cri-containerd-9638567001c193fda85f01e7466e086927fd3011f9716204508f068ca425615c.scope - libcontainer container 9638567001c193fda85f01e7466e086927fd3011f9716204508f068ca425615c. May 16 10:04:15.696764 containerd[1591]: time="2025-05-16T10:04:15.696716469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c2789426f2d54711763348e0fc412620,Namespace:kube-system,Attempt:0,} returns sandbox id \"c55f0ebc08230da182b9cea752d6acf399ffae13b07db0b6a7c1d0d6a61ce920\"" May 16 10:04:15.698044 kubelet[2346]: E0516 10:04:15.698019 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:04:15.701241 containerd[1591]: time="2025-05-16T10:04:15.701205704Z" level=info msg="CreateContainer within sandbox \"c55f0ebc08230da182b9cea752d6acf399ffae13b07db0b6a7c1d0d6a61ce920\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 16 10:04:15.720384 containerd[1591]: time="2025-05-16T10:04:15.720327882Z" level=info msg="Container f9f057514b03218b2fa0c52b3c29a9d788cd9adb56648941abf5305c5fd4a853: CDI devices from CRI Config.CDIDevices: []" May 16 10:04:15.745243 containerd[1591]: time="2025-05-16T10:04:15.745182976Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:447e79232307504a6964f3be51e3d64d,Namespace:kube-system,Attempt:0,} returns sandbox id \"9638567001c193fda85f01e7466e086927fd3011f9716204508f068ca425615c\"" May 16 10:04:15.746021 kubelet[2346]: E0516 10:04:15.745988 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:04:15.747031 containerd[1591]: time="2025-05-16T10:04:15.746990910Z" level=info msg="CreateContainer within sandbox \"c55f0ebc08230da182b9cea752d6acf399ffae13b07db0b6a7c1d0d6a61ce920\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f9f057514b03218b2fa0c52b3c29a9d788cd9adb56648941abf5305c5fd4a853\"" May 16 10:04:15.747622 containerd[1591]: time="2025-05-16T10:04:15.747570686Z" level=info msg="StartContainer for \"f9f057514b03218b2fa0c52b3c29a9d788cd9adb56648941abf5305c5fd4a853\"" May 16 10:04:15.747940 containerd[1591]: time="2025-05-16T10:04:15.747894175Z" level=info msg="CreateContainer within sandbox \"9638567001c193fda85f01e7466e086927fd3011f9716204508f068ca425615c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 16 10:04:15.748866 containerd[1591]: time="2025-05-16T10:04:15.748822852Z" level=info msg="connecting to shim f9f057514b03218b2fa0c52b3c29a9d788cd9adb56648941abf5305c5fd4a853" address="unix:///run/containerd/s/d5f869faaf70cab97329d37962d9780518ae0721eef1a5aac98a65b9e4c30942" protocol=ttrpc version=3 May 16 10:04:15.757598 containerd[1591]: time="2025-05-16T10:04:15.757498362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7c751acbcd1525da2f1a64e395f86bdd,Namespace:kube-system,Attempt:0,} returns sandbox id \"652974465f6d98a99b1279c7929b050b6d3a34eca45bd4c6f967049442f80ddc\"" May 16 10:04:15.758330 kubelet[2346]: E0516 10:04:15.758305 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:04:15.893920 containerd[1591]: time="2025-05-16T10:04:15.893823690Z" level=info msg="Container 6c12b010cd128b91a3c57158a853646ed10fece17e2759b91d88e626301742b0: CDI devices from CRI Config.CDIDevices: []" May 16 10:04:15.894336 containerd[1591]: time="2025-05-16T10:04:15.894303449Z" level=info msg="CreateContainer within sandbox \"652974465f6d98a99b1279c7929b050b6d3a34eca45bd4c6f967049442f80ddc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 16 10:04:15.906691 containerd[1591]: time="2025-05-16T10:04:15.906545313Z" level=info msg="CreateContainer within sandbox \"9638567001c193fda85f01e7466e086927fd3011f9716204508f068ca425615c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6c12b010cd128b91a3c57158a853646ed10fece17e2759b91d88e626301742b0\"" May 16 10:04:15.906991 containerd[1591]: time="2025-05-16T10:04:15.906963244Z" level=info msg="StartContainer for \"6c12b010cd128b91a3c57158a853646ed10fece17e2759b91d88e626301742b0\"" May 16 10:04:15.907803 systemd[1]: Started cri-containerd-f9f057514b03218b2fa0c52b3c29a9d788cd9adb56648941abf5305c5fd4a853.scope - libcontainer container f9f057514b03218b2fa0c52b3c29a9d788cd9adb56648941abf5305c5fd4a853. 
May 16 10:04:15.908483 containerd[1591]: time="2025-05-16T10:04:15.908455250Z" level=info msg="connecting to shim 6c12b010cd128b91a3c57158a853646ed10fece17e2759b91d88e626301742b0" address="unix:///run/containerd/s/5431d4ec7dd3fe83324ea10cc9a9672c4f57ee3710616c75562c2986a8891730" protocol=ttrpc version=3 May 16 10:04:15.913430 containerd[1591]: time="2025-05-16T10:04:15.912745753Z" level=info msg="Container cf6f8bc9a6f7f8ac94d75476b9e8df6ed5cc30b7c06100323ead40a036907d40: CDI devices from CRI Config.CDIDevices: []" May 16 10:04:15.922124 containerd[1591]: time="2025-05-16T10:04:15.922073617Z" level=info msg="CreateContainer within sandbox \"652974465f6d98a99b1279c7929b050b6d3a34eca45bd4c6f967049442f80ddc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"cf6f8bc9a6f7f8ac94d75476b9e8df6ed5cc30b7c06100323ead40a036907d40\"" May 16 10:04:15.923074 containerd[1591]: time="2025-05-16T10:04:15.923042621Z" level=info msg="StartContainer for \"cf6f8bc9a6f7f8ac94d75476b9e8df6ed5cc30b7c06100323ead40a036907d40\"" May 16 10:04:15.924580 containerd[1591]: time="2025-05-16T10:04:15.924540614Z" level=info msg="connecting to shim cf6f8bc9a6f7f8ac94d75476b9e8df6ed5cc30b7c06100323ead40a036907d40" address="unix:///run/containerd/s/9e5cd50477e5da1014e50fea3a8c60dc08cbe65192a0b7f6a461cf2bcb9d89c9" protocol=ttrpc version=3 May 16 10:04:15.934747 systemd[1]: Started cri-containerd-6c12b010cd128b91a3c57158a853646ed10fece17e2759b91d88e626301742b0.scope - libcontainer container 6c12b010cd128b91a3c57158a853646ed10fece17e2759b91d88e626301742b0. May 16 10:04:15.957873 systemd[1]: Started cri-containerd-cf6f8bc9a6f7f8ac94d75476b9e8df6ed5cc30b7c06100323ead40a036907d40.scope - libcontainer container cf6f8bc9a6f7f8ac94d75476b9e8df6ed5cc30b7c06100323ead40a036907d40. 
May 16 10:04:15.986639 containerd[1591]: time="2025-05-16T10:04:15.986583259Z" level=info msg="StartContainer for \"f9f057514b03218b2fa0c52b3c29a9d788cd9adb56648941abf5305c5fd4a853\" returns successfully" May 16 10:04:16.028109 containerd[1591]: time="2025-05-16T10:04:16.028043981Z" level=info msg="StartContainer for \"6c12b010cd128b91a3c57158a853646ed10fece17e2759b91d88e626301742b0\" returns successfully" May 16 10:04:16.040090 containerd[1591]: time="2025-05-16T10:04:16.040031994Z" level=info msg="StartContainer for \"cf6f8bc9a6f7f8ac94d75476b9e8df6ed5cc30b7c06100323ead40a036907d40\" returns successfully" May 16 10:04:16.382042 kubelet[2346]: E0516 10:04:16.381849 2346 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 10:04:16.382042 kubelet[2346]: E0516 10:04:16.381959 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:04:16.384025 kubelet[2346]: E0516 10:04:16.384008 2346 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 10:04:16.384126 kubelet[2346]: E0516 10:04:16.384115 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:04:16.385681 kubelet[2346]: E0516 10:04:16.385664 2346 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 10:04:16.385954 kubelet[2346]: E0516 10:04:16.385904 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:04:16.819057 
kubelet[2346]: E0516 10:04:16.819015 2346 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 16 10:04:17.387405 kubelet[2346]: E0516 10:04:17.387373 2346 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 10:04:17.387577 kubelet[2346]: E0516 10:04:17.387499 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:04:17.387577 kubelet[2346]: E0516 10:04:17.387503 2346 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 10:04:17.387726 kubelet[2346]: E0516 10:04:17.387653 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:04:17.561250 kubelet[2346]: E0516 10:04:17.561213 2346 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found May 16 10:04:18.042269 kubelet[2346]: E0516 10:04:18.042227 2346 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found May 16 10:04:18.467473 kubelet[2346]: E0516 10:04:18.467356 2346 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found May 16 10:04:18.533768 kubelet[2346]: I0516 10:04:18.533740 2346 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 16 10:04:18.538353 kubelet[2346]: I0516 10:04:18.538320 2346 kubelet_node_status.go:79] 
"Successfully registered node" node="localhost" May 16 10:04:18.538353 kubelet[2346]: E0516 10:04:18.538349 2346 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 16 10:04:18.540920 kubelet[2346]: E0516 10:04:18.540898 2346 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 10:04:18.641417 kubelet[2346]: E0516 10:04:18.641355 2346 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 10:04:18.742449 kubelet[2346]: E0516 10:04:18.742320 2346 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 10:04:18.845024 kubelet[2346]: I0516 10:04:18.844941 2346 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 16 10:04:18.852068 kubelet[2346]: I0516 10:04:18.852006 2346 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 16 10:04:18.857444 kubelet[2346]: I0516 10:04:18.857390 2346 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 16 10:04:18.908221 systemd[1]: Reload requested from client PID 2624 ('systemctl') (unit session-7.scope)... May 16 10:04:18.908238 systemd[1]: Reloading... May 16 10:04:19.000564 zram_generator::config[2670]: No configuration found. 
May 16 10:04:19.043284 kubelet[2346]: I0516 10:04:19.043241 2346 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 16 10:04:19.047281 kubelet[2346]: E0516 10:04:19.047227 2346 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 16 10:04:19.047439 kubelet[2346]: E0516 10:04:19.047420 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:04:19.086698 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 10:04:19.215721 systemd[1]: Reloading finished in 307 ms. May 16 10:04:19.248263 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 16 10:04:19.266826 systemd[1]: kubelet.service: Deactivated successfully. May 16 10:04:19.267133 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 16 10:04:19.267203 systemd[1]: kubelet.service: Consumed 1.510s CPU time, 126.2M memory peak. May 16 10:04:19.276354 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 10:04:19.563976 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 10:04:19.573849 (kubelet)[2712]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 16 10:04:19.614374 kubelet[2712]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 16 10:04:19.614374 kubelet[2712]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 16 10:04:19.614374 kubelet[2712]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 10:04:19.614772 kubelet[2712]: I0516 10:04:19.614453 2712 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 16 10:04:19.622580 kubelet[2712]: I0516 10:04:19.621890 2712 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 16 10:04:19.622580 kubelet[2712]: I0516 10:04:19.621917 2712 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 16 10:04:19.622580 kubelet[2712]: I0516 10:04:19.622266 2712 server.go:954] "Client rotation is on, will bootstrap in background" May 16 10:04:19.624627 kubelet[2712]: I0516 10:04:19.624586 2712 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 16 10:04:19.627390 kubelet[2712]: I0516 10:04:19.627311 2712 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 16 10:04:19.631344 kubelet[2712]: I0516 10:04:19.631311 2712 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 16 10:04:19.636396 kubelet[2712]: I0516 10:04:19.636362 2712 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 16 10:04:19.636671 kubelet[2712]: I0516 10:04:19.636633 2712 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 16 10:04:19.636839 kubelet[2712]: I0516 10:04:19.636663 2712 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 16 10:04:19.636955 kubelet[2712]: I0516 10:04:19.636840 2712 topology_manager.go:138] "Creating topology manager with none policy" 
May 16 10:04:19.636955 kubelet[2712]: I0516 10:04:19.636849 2712 container_manager_linux.go:304] "Creating device plugin manager" May 16 10:04:19.636955 kubelet[2712]: I0516 10:04:19.636902 2712 state_mem.go:36] "Initialized new in-memory state store" May 16 10:04:19.637069 kubelet[2712]: I0516 10:04:19.637056 2712 kubelet.go:446] "Attempting to sync node with API server" May 16 10:04:19.637069 kubelet[2712]: I0516 10:04:19.637069 2712 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 16 10:04:19.637122 kubelet[2712]: I0516 10:04:19.637088 2712 kubelet.go:352] "Adding apiserver pod source" May 16 10:04:19.637122 kubelet[2712]: I0516 10:04:19.637098 2712 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 16 10:04:19.745600 kubelet[2712]: I0516 10:04:19.744110 2712 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 16 10:04:19.745600 kubelet[2712]: I0516 10:04:19.744626 2712 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 16 10:04:19.745600 kubelet[2712]: I0516 10:04:19.745208 2712 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 16 10:04:19.745600 kubelet[2712]: I0516 10:04:19.745240 2712 server.go:1287] "Started kubelet" May 16 10:04:19.753653 kubelet[2712]: I0516 10:04:19.753631 2712 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 16 10:04:19.756366 kubelet[2712]: I0516 10:04:19.755932 2712 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 16 10:04:19.758743 kubelet[2712]: I0516 10:04:19.757371 2712 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 16 10:04:19.758743 kubelet[2712]: I0516 10:04:19.757798 2712 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 16 10:04:19.758743 kubelet[2712]: I0516 10:04:19.758735 2712 
server.go:490] "Adding debug handlers to kubelet server" May 16 10:04:19.761493 kubelet[2712]: I0516 10:04:19.761472 2712 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 16 10:04:19.765608 kubelet[2712]: I0516 10:04:19.765585 2712 volume_manager.go:297] "Starting Kubelet Volume Manager" May 16 10:04:19.765746 kubelet[2712]: I0516 10:04:19.765731 2712 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 16 10:04:19.765872 kubelet[2712]: I0516 10:04:19.765859 2712 reconciler.go:26] "Reconciler: start to sync state" May 16 10:04:19.766529 kubelet[2712]: I0516 10:04:19.766502 2712 factory.go:221] Registration of the systemd container factory successfully May 16 10:04:19.766661 kubelet[2712]: I0516 10:04:19.766628 2712 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 16 10:04:19.767262 kubelet[2712]: E0516 10:04:19.767238 2712 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 16 10:04:19.767539 kubelet[2712]: I0516 10:04:19.767461 2712 factory.go:221] Registration of the containerd container factory successfully May 16 10:04:19.774546 kubelet[2712]: I0516 10:04:19.774262 2712 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 16 10:04:19.775426 kubelet[2712]: I0516 10:04:19.775399 2712 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 16 10:04:19.775426 kubelet[2712]: I0516 10:04:19.775426 2712 status_manager.go:227] "Starting to sync pod status with apiserver" May 16 10:04:19.775530 kubelet[2712]: I0516 10:04:19.775446 2712 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 16 10:04:19.775530 kubelet[2712]: I0516 10:04:19.775454 2712 kubelet.go:2388] "Starting kubelet main sync loop" May 16 10:04:19.775530 kubelet[2712]: E0516 10:04:19.775496 2712 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 16 10:04:19.819823 kubelet[2712]: I0516 10:04:19.817565 2712 cpu_manager.go:221] "Starting CPU manager" policy="none" May 16 10:04:19.819823 kubelet[2712]: I0516 10:04:19.818400 2712 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 16 10:04:19.819823 kubelet[2712]: I0516 10:04:19.818422 2712 state_mem.go:36] "Initialized new in-memory state store" May 16 10:04:19.819823 kubelet[2712]: I0516 10:04:19.818607 2712 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 16 10:04:19.819823 kubelet[2712]: I0516 10:04:19.818620 2712 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 16 10:04:19.819823 kubelet[2712]: I0516 10:04:19.818638 2712 policy_none.go:49] "None policy: Start" May 16 10:04:19.819823 kubelet[2712]: I0516 10:04:19.818646 2712 memory_manager.go:186] "Starting memorymanager" policy="None" May 16 10:04:19.819823 kubelet[2712]: I0516 10:04:19.818658 2712 state_mem.go:35] "Initializing new in-memory state store" May 16 10:04:19.819823 kubelet[2712]: I0516 10:04:19.818746 2712 state_mem.go:75] "Updated machine memory state" May 16 10:04:19.824956 kubelet[2712]: I0516 10:04:19.824914 2712 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 16 10:04:19.825330 kubelet[2712]: I0516 
10:04:19.825184 2712 eviction_manager.go:189] "Eviction manager: starting control loop" May 16 10:04:19.825330 kubelet[2712]: I0516 10:04:19.825204 2712 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 16 10:04:19.826389 kubelet[2712]: I0516 10:04:19.825680 2712 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 16 10:04:19.828492 kubelet[2712]: E0516 10:04:19.828436 2712 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 16 10:04:19.877237 kubelet[2712]: I0516 10:04:19.876995 2712 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 16 10:04:19.877237 kubelet[2712]: I0516 10:04:19.877050 2712 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 16 10:04:19.877851 kubelet[2712]: I0516 10:04:19.877829 2712 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 16 10:04:19.885867 kubelet[2712]: E0516 10:04:19.885744 2712 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 16 10:04:19.885867 kubelet[2712]: E0516 10:04:19.885764 2712 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 16 10:04:19.885955 kubelet[2712]: E0516 10:04:19.885885 2712 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 16 10:04:19.937321 kubelet[2712]: I0516 10:04:19.937291 2712 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 16 10:04:19.948544 kubelet[2712]: I0516 10:04:19.948072 2712 kubelet_node_status.go:125] "Node was 
previously registered" node="localhost" May 16 10:04:19.948544 kubelet[2712]: I0516 10:04:19.948180 2712 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 16 10:04:20.071042 kubelet[2712]: I0516 10:04:20.070892 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/447e79232307504a6964f3be51e3d64d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"447e79232307504a6964f3be51e3d64d\") " pod="kube-system/kube-scheduler-localhost" May 16 10:04:20.071042 kubelet[2712]: I0516 10:04:20.070953 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c2789426f2d54711763348e0fc412620-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c2789426f2d54711763348e0fc412620\") " pod="kube-system/kube-apiserver-localhost" May 16 10:04:20.071561 kubelet[2712]: I0516 10:04:20.071064 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c2789426f2d54711763348e0fc412620-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c2789426f2d54711763348e0fc412620\") " pod="kube-system/kube-apiserver-localhost" May 16 10:04:20.072271 kubelet[2712]: I0516 10:04:20.071770 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 10:04:20.072271 kubelet[2712]: I0516 10:04:20.072018 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 10:04:20.072271 kubelet[2712]: I0516 10:04:20.072254 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 10:04:20.072381 kubelet[2712]: I0516 10:04:20.072286 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c2789426f2d54711763348e0fc412620-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c2789426f2d54711763348e0fc412620\") " pod="kube-system/kube-apiserver-localhost" May 16 10:04:20.072939 kubelet[2712]: I0516 10:04:20.072915 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 10:04:20.072989 kubelet[2712]: I0516 10:04:20.072972 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 10:04:20.186804 kubelet[2712]: E0516 10:04:20.186752 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:04:20.186804 kubelet[2712]: E0516 10:04:20.186786 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:04:20.186952 kubelet[2712]: E0516 10:04:20.186930 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:04:20.645529 kubelet[2712]: I0516 10:04:20.645452 2712 apiserver.go:52] "Watching apiserver" May 16 10:04:20.669481 kubelet[2712]: I0516 10:04:20.669422 2712 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 16 10:04:20.763319 kubelet[2712]: I0516 10:04:20.763162 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.763142084 podStartE2EDuration="2.763142084s" podCreationTimestamp="2025-05-16 10:04:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 10:04:20.746078198 +0000 UTC m=+1.168222111" watchObservedRunningTime="2025-05-16 10:04:20.763142084 +0000 UTC m=+1.185285987" May 16 10:04:20.774529 kubelet[2712]: I0516 10:04:20.774433 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.77441633 podStartE2EDuration="2.77441633s" podCreationTimestamp="2025-05-16 10:04:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 10:04:20.763651366 +0000 UTC m=+1.185795279" watchObservedRunningTime="2025-05-16 10:04:20.77441633 +0000 UTC m=+1.196560243" May 16 10:04:20.774674 kubelet[2712]: I0516 10:04:20.774573 2712 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.774567589 podStartE2EDuration="2.774567589s" podCreationTimestamp="2025-05-16 10:04:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 10:04:20.773867061 +0000 UTC m=+1.196010974" watchObservedRunningTime="2025-05-16 10:04:20.774567589 +0000 UTC m=+1.196711503" May 16 10:04:20.791792 kubelet[2712]: I0516 10:04:20.791737 2712 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 16 10:04:20.792126 kubelet[2712]: E0516 10:04:20.792091 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:04:20.792173 kubelet[2712]: E0516 10:04:20.792152 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:04:20.798097 kubelet[2712]: E0516 10:04:20.797991 2712 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 16 10:04:20.798286 kubelet[2712]: E0516 10:04:20.798195 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:04:21.794213 kubelet[2712]: E0516 10:04:21.794169 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:04:21.795701 kubelet[2712]: E0516 10:04:21.795674 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:04:24.397862 sudo[1795]: pam_unix(sudo:session): session closed for user root May 16 10:04:24.400957 sshd[1794]: Connection closed by 10.0.0.1 port 59144 May 16 10:04:24.400845 sshd-session[1792]: pam_unix(sshd:session): session closed for user core May 16 10:04:24.407961 systemd[1]: sshd@6-10.0.0.79:22-10.0.0.1:59144.service: Deactivated successfully. May 16 10:04:24.411078 systemd[1]: session-7.scope: Deactivated successfully. May 16 10:04:24.411470 systemd[1]: session-7.scope: Consumed 4.237s CPU time, 218.7M memory peak. May 16 10:04:24.412987 systemd-logind[1576]: Session 7 logged out. Waiting for processes to exit. May 16 10:04:24.414700 systemd-logind[1576]: Removed session 7. May 16 10:04:24.460241 kubelet[2712]: E0516 10:04:24.460202 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:04:24.491958 kubelet[2712]: I0516 10:04:24.491924 2712 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 16 10:04:24.492235 containerd[1591]: time="2025-05-16T10:04:24.492193793Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 16 10:04:24.492616 kubelet[2712]: I0516 10:04:24.492454 2712 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 16 10:04:25.707063 systemd[1]: Created slice kubepods-besteffort-pod514f971f_6ab5_4e51_9b65_1413a80d8e06.slice - libcontainer container kubepods-besteffort-pod514f971f_6ab5_4e51_9b65_1413a80d8e06.slice. 
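The `kubepods-besteffort-pod514f971f_6ab5_4e51_9b65_1413a80d8e06.slice` name in the record above follows the kubelet's systemd cgroup naming convention: systemd unit names cannot carry the pod UID's dashes in the escaped segment, so they become underscores. A minimal sketch of that derivation (the helper is illustrative, not the kubelet's actual code; the QoS class and UID are taken from the log):

```go
package main

import (
	"fmt"
	"strings"
)

// kubepodsSliceName sketches how the cgroup slice name logged above is
// formed: "kubepods-<qosClass>-pod<uid>.slice", with the pod UID's
// dashes escaped to underscores for systemd. Illustrative only.
func kubepodsSliceName(qos, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice",
		qos, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	// UID of kube-proxy-bfp68 from the surrounding log records.
	fmt.Println(kubepodsSliceName("besteffort", "514f971f-6ab5-4e51-9b65-1413a80d8e06"))
}
```

Running this reproduces the slice name systemd reports when the pod starts.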
May 16 10:04:25.808957 kubelet[2712]: I0516 10:04:25.808905 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/514f971f-6ab5-4e51-9b65-1413a80d8e06-kube-proxy\") pod \"kube-proxy-bfp68\" (UID: \"514f971f-6ab5-4e51-9b65-1413a80d8e06\") " pod="kube-system/kube-proxy-bfp68" May 16 10:04:25.808957 kubelet[2712]: I0516 10:04:25.808949 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/514f971f-6ab5-4e51-9b65-1413a80d8e06-xtables-lock\") pod \"kube-proxy-bfp68\" (UID: \"514f971f-6ab5-4e51-9b65-1413a80d8e06\") " pod="kube-system/kube-proxy-bfp68" May 16 10:04:25.808957 kubelet[2712]: I0516 10:04:25.808968 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/514f971f-6ab5-4e51-9b65-1413a80d8e06-lib-modules\") pod \"kube-proxy-bfp68\" (UID: \"514f971f-6ab5-4e51-9b65-1413a80d8e06\") " pod="kube-system/kube-proxy-bfp68" May 16 10:04:25.809505 kubelet[2712]: I0516 10:04:25.808986 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5lrq\" (UniqueName: \"kubernetes.io/projected/514f971f-6ab5-4e51-9b65-1413a80d8e06-kube-api-access-t5lrq\") pod \"kube-proxy-bfp68\" (UID: \"514f971f-6ab5-4e51-9b65-1413a80d8e06\") " pod="kube-system/kube-proxy-bfp68" May 16 10:04:26.025681 kubelet[2712]: E0516 10:04:26.025641 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:04:26.026222 containerd[1591]: time="2025-05-16T10:04:26.026185791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bfp68,Uid:514f971f-6ab5-4e51-9b65-1413a80d8e06,Namespace:kube-system,Attempt:0,}" May 
16 10:04:26.104999 systemd[1]: Created slice kubepods-besteffort-podd72f01b1_085d_4c80_a932_76feeb9e6fd8.slice - libcontainer container kubepods-besteffort-podd72f01b1_085d_4c80_a932_76feeb9e6fd8.slice. May 16 10:04:26.110921 kubelet[2712]: I0516 10:04:26.110889 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d72f01b1-085d-4c80-a932-76feeb9e6fd8-var-lib-calico\") pod \"tigera-operator-789496d6f5-j7422\" (UID: \"d72f01b1-085d-4c80-a932-76feeb9e6fd8\") " pod="tigera-operator/tigera-operator-789496d6f5-j7422" May 16 10:04:26.111007 kubelet[2712]: I0516 10:04:26.110926 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfqsr\" (UniqueName: \"kubernetes.io/projected/d72f01b1-085d-4c80-a932-76feeb9e6fd8-kube-api-access-wfqsr\") pod \"tigera-operator-789496d6f5-j7422\" (UID: \"d72f01b1-085d-4c80-a932-76feeb9e6fd8\") " pod="tigera-operator/tigera-operator-789496d6f5-j7422" May 16 10:04:26.708129 containerd[1591]: time="2025-05-16T10:04:26.708080176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-j7422,Uid:d72f01b1-085d-4c80-a932-76feeb9e6fd8,Namespace:tigera-operator,Attempt:0,}" May 16 10:04:26.951215 containerd[1591]: time="2025-05-16T10:04:26.951156535Z" level=info msg="connecting to shim 530be95c88b12768a11e3dc10cf4550210b67f9d1527022369ad2a62204efbb2" address="unix:///run/containerd/s/43c08f3ff859b96a91e2723c9360e0d47f0a2b1d57f829d3d385f5ba0b8a4b3f" namespace=k8s.io protocol=ttrpc version=3 May 16 10:04:26.952925 containerd[1591]: time="2025-05-16T10:04:26.952892879Z" level=info msg="connecting to shim c582edd2c9748607739789a8ca3cfe61f59f4ee06815260587cb620c33121352" address="unix:///run/containerd/s/13c3b32a2b46f195f00086ec7474a4b9fe71f9c8546512fba3439d3769644ed0" namespace=k8s.io protocol=ttrpc version=3 May 16 10:04:26.976871 systemd[1]: Started 
cri-containerd-530be95c88b12768a11e3dc10cf4550210b67f9d1527022369ad2a62204efbb2.scope - libcontainer container 530be95c88b12768a11e3dc10cf4550210b67f9d1527022369ad2a62204efbb2. May 16 10:04:27.012652 systemd[1]: Started cri-containerd-c582edd2c9748607739789a8ca3cfe61f59f4ee06815260587cb620c33121352.scope - libcontainer container c582edd2c9748607739789a8ca3cfe61f59f4ee06815260587cb620c33121352. May 16 10:04:27.045374 containerd[1591]: time="2025-05-16T10:04:27.045300620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bfp68,Uid:514f971f-6ab5-4e51-9b65-1413a80d8e06,Namespace:kube-system,Attempt:0,} returns sandbox id \"c582edd2c9748607739789a8ca3cfe61f59f4ee06815260587cb620c33121352\"" May 16 10:04:27.046343 kubelet[2712]: E0516 10:04:27.046307 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:04:27.053674 containerd[1591]: time="2025-05-16T10:04:27.053611943Z" level=info msg="CreateContainer within sandbox \"c582edd2c9748607739789a8ca3cfe61f59f4ee06815260587cb620c33121352\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 16 10:04:27.072312 containerd[1591]: time="2025-05-16T10:04:27.072246598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-j7422,Uid:d72f01b1-085d-4c80-a932-76feeb9e6fd8,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"530be95c88b12768a11e3dc10cf4550210b67f9d1527022369ad2a62204efbb2\"" May 16 10:04:27.074052 containerd[1591]: time="2025-05-16T10:04:27.074026486Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 16 10:04:27.076904 containerd[1591]: time="2025-05-16T10:04:27.076083649Z" level=info msg="Container eeae7e783aaf6da1934da857dc69e715429134b2f41cd1754446df83a3ced169: CDI devices from CRI Config.CDIDevices: []" May 16 10:04:27.085760 containerd[1591]: time="2025-05-16T10:04:27.085722455Z" level=info 
msg="CreateContainer within sandbox \"c582edd2c9748607739789a8ca3cfe61f59f4ee06815260587cb620c33121352\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"eeae7e783aaf6da1934da857dc69e715429134b2f41cd1754446df83a3ced169\"" May 16 10:04:27.086192 containerd[1591]: time="2025-05-16T10:04:27.086170380Z" level=info msg="StartContainer for \"eeae7e783aaf6da1934da857dc69e715429134b2f41cd1754446df83a3ced169\"" May 16 10:04:27.087718 containerd[1591]: time="2025-05-16T10:04:27.087690739Z" level=info msg="connecting to shim eeae7e783aaf6da1934da857dc69e715429134b2f41cd1754446df83a3ced169" address="unix:///run/containerd/s/13c3b32a2b46f195f00086ec7474a4b9fe71f9c8546512fba3439d3769644ed0" protocol=ttrpc version=3 May 16 10:04:27.107664 systemd[1]: Started cri-containerd-eeae7e783aaf6da1934da857dc69e715429134b2f41cd1754446df83a3ced169.scope - libcontainer container eeae7e783aaf6da1934da857dc69e715429134b2f41cd1754446df83a3ced169. May 16 10:04:27.150323 containerd[1591]: time="2025-05-16T10:04:27.150264987Z" level=info msg="StartContainer for \"eeae7e783aaf6da1934da857dc69e715429134b2f41cd1754446df83a3ced169\" returns successfully" May 16 10:04:27.804797 kubelet[2712]: E0516 10:04:27.804764 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:04:27.813151 kubelet[2712]: I0516 10:04:27.813089 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bfp68" podStartSLOduration=2.813067825 podStartE2EDuration="2.813067825s" podCreationTimestamp="2025-05-16 10:04:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 10:04:27.812996731 +0000 UTC m=+8.235140654" watchObservedRunningTime="2025-05-16 10:04:27.813067825 +0000 UTC m=+8.235211738" May 16 10:04:28.382594 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3891759189.mount: Deactivated successfully. May 16 10:04:28.482927 kubelet[2712]: E0516 10:04:28.482893 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:04:28.685845 containerd[1591]: time="2025-05-16T10:04:28.685719717Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 10:04:28.686748 containerd[1591]: time="2025-05-16T10:04:28.686695026Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662" May 16 10:04:28.688121 containerd[1591]: time="2025-05-16T10:04:28.688084929Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 10:04:28.689668 kubelet[2712]: E0516 10:04:28.689607 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:04:28.691003 containerd[1591]: time="2025-05-16T10:04:28.690972728Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 10:04:28.691825 containerd[1591]: time="2025-05-16T10:04:28.691787990Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 1.617709039s" May 16 10:04:28.691825 containerd[1591]: 
time="2025-05-16T10:04:28.691818287Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\"" May 16 10:04:28.694009 containerd[1591]: time="2025-05-16T10:04:28.693974597Z" level=info msg="CreateContainer within sandbox \"530be95c88b12768a11e3dc10cf4550210b67f9d1527022369ad2a62204efbb2\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 16 10:04:28.704678 containerd[1591]: time="2025-05-16T10:04:28.704252949Z" level=info msg="Container 4e862e6e59323acfbc4e4fb13ffc17e217200389091e9e0cd739c4071acf0060: CDI devices from CRI Config.CDIDevices: []" May 16 10:04:28.711792 containerd[1591]: time="2025-05-16T10:04:28.711742042Z" level=info msg="CreateContainer within sandbox \"530be95c88b12768a11e3dc10cf4550210b67f9d1527022369ad2a62204efbb2\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"4e862e6e59323acfbc4e4fb13ffc17e217200389091e9e0cd739c4071acf0060\"" May 16 10:04:28.712257 containerd[1591]: time="2025-05-16T10:04:28.712229200Z" level=info msg="StartContainer for \"4e862e6e59323acfbc4e4fb13ffc17e217200389091e9e0cd739c4071acf0060\"" May 16 10:04:28.713133 containerd[1591]: time="2025-05-16T10:04:28.713111693Z" level=info msg="connecting to shim 4e862e6e59323acfbc4e4fb13ffc17e217200389091e9e0cd739c4071acf0060" address="unix:///run/containerd/s/43c08f3ff859b96a91e2723c9360e0d47f0a2b1d57f829d3d385f5ba0b8a4b3f" protocol=ttrpc version=3 May 16 10:04:28.740664 systemd[1]: Started cri-containerd-4e862e6e59323acfbc4e4fb13ffc17e217200389091e9e0cd739c4071acf0060.scope - libcontainer container 4e862e6e59323acfbc4e4fb13ffc17e217200389091e9e0cd739c4071acf0060. 
May 16 10:04:28.877739 containerd[1591]: time="2025-05-16T10:04:28.877690002Z" level=info msg="StartContainer for \"4e862e6e59323acfbc4e4fb13ffc17e217200389091e9e0cd739c4071acf0060\" returns successfully" May 16 10:04:28.880040 kubelet[2712]: E0516 10:04:28.880010 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:04:28.880704 kubelet[2712]: E0516 10:04:28.880010 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:04:31.353648 update_engine[1582]: I20250516 10:04:31.353569 1582 update_attempter.cc:509] Updating boot flags... May 16 10:04:31.738636 kubelet[2712]: I0516 10:04:31.738474 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-789496d6f5-j7422" podStartSLOduration=5.119280666 podStartE2EDuration="6.738456338s" podCreationTimestamp="2025-05-16 10:04:25 +0000 UTC" firstStartedPulling="2025-05-16 10:04:27.073632791 +0000 UTC m=+7.495776704" lastFinishedPulling="2025-05-16 10:04:28.692808463 +0000 UTC m=+9.114952376" observedRunningTime="2025-05-16 10:04:29.891509864 +0000 UTC m=+10.313653787" watchObservedRunningTime="2025-05-16 10:04:31.738456338 +0000 UTC m=+12.160600251" May 16 10:04:31.747826 systemd[1]: Created slice kubepods-besteffort-podf5a9c45e_50f9_424b_a564_3fbce86c0ab2.slice - libcontainer container kubepods-besteffort-podf5a9c45e_50f9_424b_a564_3fbce86c0ab2.slice. May 16 10:04:31.797335 systemd[1]: Created slice kubepods-besteffort-pod57d462f3_4a3d_42b3_9610_fcce09d094b9.slice - libcontainer container kubepods-besteffort-pod57d462f3_4a3d_42b3_9610_fcce09d094b9.slice. 
May 16 10:04:31.844735 kubelet[2712]: I0516 10:04:31.844685 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/57d462f3-4a3d-42b3-9610-fcce09d094b9-xtables-lock\") pod \"calico-node-cm57r\" (UID: \"57d462f3-4a3d-42b3-9610-fcce09d094b9\") " pod="calico-system/calico-node-cm57r" May 16 10:04:31.844735 kubelet[2712]: I0516 10:04:31.844727 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/57d462f3-4a3d-42b3-9610-fcce09d094b9-node-certs\") pod \"calico-node-cm57r\" (UID: \"57d462f3-4a3d-42b3-9610-fcce09d094b9\") " pod="calico-system/calico-node-cm57r" May 16 10:04:31.844735 kubelet[2712]: I0516 10:04:31.844744 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/57d462f3-4a3d-42b3-9610-fcce09d094b9-cni-bin-dir\") pod \"calico-node-cm57r\" (UID: \"57d462f3-4a3d-42b3-9610-fcce09d094b9\") " pod="calico-system/calico-node-cm57r" May 16 10:04:31.844936 kubelet[2712]: I0516 10:04:31.844758 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/57d462f3-4a3d-42b3-9610-fcce09d094b9-cni-log-dir\") pod \"calico-node-cm57r\" (UID: \"57d462f3-4a3d-42b3-9610-fcce09d094b9\") " pod="calico-system/calico-node-cm57r" May 16 10:04:31.844936 kubelet[2712]: I0516 10:04:31.844775 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f5a9c45e-50f9-424b-a564-3fbce86c0ab2-typha-certs\") pod \"calico-typha-58869cf6bb-gt9hh\" (UID: \"f5a9c45e-50f9-424b-a564-3fbce86c0ab2\") " pod="calico-system/calico-typha-58869cf6bb-gt9hh" May 16 10:04:31.844936 kubelet[2712]: I0516 10:04:31.844790 2712 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/57d462f3-4a3d-42b3-9610-fcce09d094b9-var-lib-calico\") pod \"calico-node-cm57r\" (UID: \"57d462f3-4a3d-42b3-9610-fcce09d094b9\") " pod="calico-system/calico-node-cm57r" May 16 10:04:31.844936 kubelet[2712]: I0516 10:04:31.844814 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/57d462f3-4a3d-42b3-9610-fcce09d094b9-lib-modules\") pod \"calico-node-cm57r\" (UID: \"57d462f3-4a3d-42b3-9610-fcce09d094b9\") " pod="calico-system/calico-node-cm57r" May 16 10:04:31.844936 kubelet[2712]: I0516 10:04:31.844827 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/57d462f3-4a3d-42b3-9610-fcce09d094b9-policysync\") pod \"calico-node-cm57r\" (UID: \"57d462f3-4a3d-42b3-9610-fcce09d094b9\") " pod="calico-system/calico-node-cm57r" May 16 10:04:31.845102 kubelet[2712]: I0516 10:04:31.844841 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/57d462f3-4a3d-42b3-9610-fcce09d094b9-var-run-calico\") pod \"calico-node-cm57r\" (UID: \"57d462f3-4a3d-42b3-9610-fcce09d094b9\") " pod="calico-system/calico-node-cm57r" May 16 10:04:31.845102 kubelet[2712]: I0516 10:04:31.844858 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/57d462f3-4a3d-42b3-9610-fcce09d094b9-cni-net-dir\") pod \"calico-node-cm57r\" (UID: \"57d462f3-4a3d-42b3-9610-fcce09d094b9\") " pod="calico-system/calico-node-cm57r" May 16 10:04:31.845102 kubelet[2712]: I0516 10:04:31.844873 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/57d462f3-4a3d-42b3-9610-fcce09d094b9-flexvol-driver-host\") pod \"calico-node-cm57r\" (UID: \"57d462f3-4a3d-42b3-9610-fcce09d094b9\") " pod="calico-system/calico-node-cm57r" May 16 10:04:31.845102 kubelet[2712]: I0516 10:04:31.844889 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rgzb\" (UniqueName: \"kubernetes.io/projected/57d462f3-4a3d-42b3-9610-fcce09d094b9-kube-api-access-5rgzb\") pod \"calico-node-cm57r\" (UID: \"57d462f3-4a3d-42b3-9610-fcce09d094b9\") " pod="calico-system/calico-node-cm57r" May 16 10:04:31.845102 kubelet[2712]: I0516 10:04:31.844904 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57d462f3-4a3d-42b3-9610-fcce09d094b9-tigera-ca-bundle\") pod \"calico-node-cm57r\" (UID: \"57d462f3-4a3d-42b3-9610-fcce09d094b9\") " pod="calico-system/calico-node-cm57r" May 16 10:04:31.845238 kubelet[2712]: I0516 10:04:31.844919 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5a9c45e-50f9-424b-a564-3fbce86c0ab2-tigera-ca-bundle\") pod \"calico-typha-58869cf6bb-gt9hh\" (UID: \"f5a9c45e-50f9-424b-a564-3fbce86c0ab2\") " pod="calico-system/calico-typha-58869cf6bb-gt9hh" May 16 10:04:31.845238 kubelet[2712]: I0516 10:04:31.844934 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp2ts\" (UniqueName: \"kubernetes.io/projected/f5a9c45e-50f9-424b-a564-3fbce86c0ab2-kube-api-access-zp2ts\") pod \"calico-typha-58869cf6bb-gt9hh\" (UID: \"f5a9c45e-50f9-424b-a564-3fbce86c0ab2\") " pod="calico-system/calico-typha-58869cf6bb-gt9hh" May 16 10:04:31.902554 kubelet[2712]: E0516 10:04:31.902281 2712 pod_workers.go:1301] "Error syncing pod, skipping" 
err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7jrt7" podUID="32e26eae-39a6-476b-8739-ed86db555147" May 16 10:04:31.945887 kubelet[2712]: I0516 10:04:31.945826 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/32e26eae-39a6-476b-8739-ed86db555147-registration-dir\") pod \"csi-node-driver-7jrt7\" (UID: \"32e26eae-39a6-476b-8739-ed86db555147\") " pod="calico-system/csi-node-driver-7jrt7" May 16 10:04:31.945887 kubelet[2712]: I0516 10:04:31.945885 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/32e26eae-39a6-476b-8739-ed86db555147-kubelet-dir\") pod \"csi-node-driver-7jrt7\" (UID: \"32e26eae-39a6-476b-8739-ed86db555147\") " pod="calico-system/csi-node-driver-7jrt7" May 16 10:04:31.945887 kubelet[2712]: I0516 10:04:31.945901 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/32e26eae-39a6-476b-8739-ed86db555147-socket-dir\") pod \"csi-node-driver-7jrt7\" (UID: \"32e26eae-39a6-476b-8739-ed86db555147\") " pod="calico-system/csi-node-driver-7jrt7" May 16 10:04:31.946098 kubelet[2712]: I0516 10:04:31.945977 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/32e26eae-39a6-476b-8739-ed86db555147-varrun\") pod \"csi-node-driver-7jrt7\" (UID: \"32e26eae-39a6-476b-8739-ed86db555147\") " pod="calico-system/csi-node-driver-7jrt7" May 16 10:04:31.946098 kubelet[2712]: I0516 10:04:31.946029 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-lcrdq\" (UniqueName: \"kubernetes.io/projected/32e26eae-39a6-476b-8739-ed86db555147-kube-api-access-lcrdq\") pod \"csi-node-driver-7jrt7\" (UID: \"32e26eae-39a6-476b-8739-ed86db555147\") " pod="calico-system/csi-node-driver-7jrt7" May 16 10:04:31.947104 kubelet[2712]: E0516 10:04:31.947082 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:31.947104 kubelet[2712]: W0516 10:04:31.947102 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:31.947183 kubelet[2712]: E0516 10:04:31.947118 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 10:04:31.947413 kubelet[2712]: E0516 10:04:31.947393 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:31.947413 kubelet[2712]: W0516 10:04:31.947409 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:31.947486 kubelet[2712]: E0516 10:04:31.947418 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 10:04:32.047386 kubelet[2712]: E0516 10:04:32.047364 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:32.047386 kubelet[2712]: W0516 10:04:32.047373 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:32.047386 kubelet[2712]: E0516 10:04:32.047387 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 10:04:32.047618 kubelet[2712]: E0516 10:04:32.047600 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:32.047618 kubelet[2712]: W0516 10:04:32.047609 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:32.047670 kubelet[2712]: E0516 10:04:32.047625 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 10:04:32.047848 kubelet[2712]: E0516 10:04:32.047834 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:32.047848 kubelet[2712]: W0516 10:04:32.047843 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:32.047897 kubelet[2712]: E0516 10:04:32.047855 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 10:04:32.048117 kubelet[2712]: E0516 10:04:32.048088 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:32.048151 kubelet[2712]: W0516 10:04:32.048114 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:32.048151 kubelet[2712]: E0516 10:04:32.048141 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 10:04:32.048343 kubelet[2712]: E0516 10:04:32.048325 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:32.048343 kubelet[2712]: W0516 10:04:32.048337 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:32.048395 kubelet[2712]: E0516 10:04:32.048348 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 10:04:32.048508 kubelet[2712]: E0516 10:04:32.048492 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:32.048508 kubelet[2712]: W0516 10:04:32.048501 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:32.048569 kubelet[2712]: E0516 10:04:32.048511 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 10:04:32.048767 kubelet[2712]: E0516 10:04:32.048747 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:32.048767 kubelet[2712]: W0516 10:04:32.048763 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:32.048817 kubelet[2712]: E0516 10:04:32.048781 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 10:04:32.048994 kubelet[2712]: E0516 10:04:32.048974 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:32.048994 kubelet[2712]: W0516 10:04:32.048988 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:32.049040 kubelet[2712]: E0516 10:04:32.049017 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 10:04:32.049177 kubelet[2712]: E0516 10:04:32.049159 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:32.049177 kubelet[2712]: W0516 10:04:32.049172 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:32.049231 kubelet[2712]: E0516 10:04:32.049206 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 10:04:32.049365 kubelet[2712]: E0516 10:04:32.049347 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:32.049365 kubelet[2712]: W0516 10:04:32.049358 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:32.049414 kubelet[2712]: E0516 10:04:32.049386 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 10:04:32.049562 kubelet[2712]: E0516 10:04:32.049546 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:32.049562 kubelet[2712]: W0516 10:04:32.049557 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:32.049617 kubelet[2712]: E0516 10:04:32.049586 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 10:04:32.049783 kubelet[2712]: E0516 10:04:32.049767 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:32.049783 kubelet[2712]: W0516 10:04:32.049780 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:32.049830 kubelet[2712]: E0516 10:04:32.049795 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 10:04:32.050004 kubelet[2712]: E0516 10:04:32.049988 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:32.050004 kubelet[2712]: W0516 10:04:32.050001 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:32.050062 kubelet[2712]: E0516 10:04:32.050017 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 10:04:32.050254 kubelet[2712]: E0516 10:04:32.050237 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:32.050254 kubelet[2712]: W0516 10:04:32.050251 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:32.050299 kubelet[2712]: E0516 10:04:32.050267 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 10:04:32.050463 kubelet[2712]: E0516 10:04:32.050449 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:32.050463 kubelet[2712]: W0516 10:04:32.050461 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:32.050506 kubelet[2712]: E0516 10:04:32.050476 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 10:04:32.050643 kubelet[2712]: E0516 10:04:32.050632 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:32.050643 kubelet[2712]: W0516 10:04:32.050640 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:32.050707 kubelet[2712]: E0516 10:04:32.050673 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 10:04:32.050803 kubelet[2712]: E0516 10:04:32.050793 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:32.050803 kubelet[2712]: W0516 10:04:32.050801 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:32.050866 kubelet[2712]: E0516 10:04:32.050846 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 10:04:32.050965 kubelet[2712]: E0516 10:04:32.050954 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:32.050965 kubelet[2712]: W0516 10:04:32.050962 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:32.051026 kubelet[2712]: E0516 10:04:32.051006 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 10:04:32.051181 kubelet[2712]: E0516 10:04:32.051165 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:32.051181 kubelet[2712]: W0516 10:04:32.051178 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:32.051236 kubelet[2712]: E0516 10:04:32.051191 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 10:04:32.051387 kubelet[2712]: E0516 10:04:32.051372 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:32.051387 kubelet[2712]: W0516 10:04:32.051383 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:32.051430 kubelet[2712]: E0516 10:04:32.051398 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 10:04:32.051575 kubelet[2712]: E0516 10:04:32.051561 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:32.051575 kubelet[2712]: W0516 10:04:32.051570 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:32.051629 kubelet[2712]: E0516 10:04:32.051583 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 10:04:32.051772 kubelet[2712]: E0516 10:04:32.051758 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:32.051772 kubelet[2712]: W0516 10:04:32.051769 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:32.051820 kubelet[2712]: E0516 10:04:32.051783 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 10:04:32.051962 kubelet[2712]: E0516 10:04:32.051952 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:32.051962 kubelet[2712]: W0516 10:04:32.051960 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:32.052006 kubelet[2712]: E0516 10:04:32.051972 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 10:04:32.052150 kubelet[2712]: E0516 10:04:32.052135 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:32.052150 kubelet[2712]: W0516 10:04:32.052148 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:32.052195 kubelet[2712]: E0516 10:04:32.052157 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 10:04:32.052473 kubelet[2712]: E0516 10:04:32.052457 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:04:32.053090 containerd[1591]: time="2025-05-16T10:04:32.053055781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-58869cf6bb-gt9hh,Uid:f5a9c45e-50f9-424b-a564-3fbce86c0ab2,Namespace:calico-system,Attempt:0,}" May 16 10:04:32.062638 kubelet[2712]: E0516 10:04:32.062575 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:32.062638 kubelet[2712]: W0516 10:04:32.062591 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:32.062638 kubelet[2712]: E0516 10:04:32.062606 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 10:04:32.075252 containerd[1591]: time="2025-05-16T10:04:32.075197253Z" level=info msg="connecting to shim a9d5c577730467aaae8d8dadb01a3139de40daa149276f40fad45ab2bd573530" address="unix:///run/containerd/s/dcf47f874a45a2598afde8a56774566309e0dcabb5495c0808f71c3e569dcaf8" namespace=k8s.io protocol=ttrpc version=3 May 16 10:04:32.100897 kubelet[2712]: E0516 10:04:32.100709 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:04:32.101305 containerd[1591]: time="2025-05-16T10:04:32.101262482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cm57r,Uid:57d462f3-4a3d-42b3-9610-fcce09d094b9,Namespace:calico-system,Attempt:0,}" May 16 10:04:32.115668 systemd[1]: Started cri-containerd-a9d5c577730467aaae8d8dadb01a3139de40daa149276f40fad45ab2bd573530.scope - libcontainer container a9d5c577730467aaae8d8dadb01a3139de40daa149276f40fad45ab2bd573530. 
May 16 10:04:32.273689 containerd[1591]: time="2025-05-16T10:04:32.273640984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-58869cf6bb-gt9hh,Uid:f5a9c45e-50f9-424b-a564-3fbce86c0ab2,Namespace:calico-system,Attempt:0,} returns sandbox id \"a9d5c577730467aaae8d8dadb01a3139de40daa149276f40fad45ab2bd573530\"" May 16 10:04:32.274292 kubelet[2712]: E0516 10:04:32.274268 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:04:32.275247 containerd[1591]: time="2025-05-16T10:04:32.275209146Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 16 10:04:32.298939 containerd[1591]: time="2025-05-16T10:04:32.298823829Z" level=info msg="connecting to shim c834442d2f5ba39b5f2ec0546168c6bf29bbbe0eda1b63f79f453724f7dedd85" address="unix:///run/containerd/s/de0a19da887bcded7dee23c148b32c1349f51492d48b029a54bbe25a05229872" namespace=k8s.io protocol=ttrpc version=3 May 16 10:04:32.326656 systemd[1]: Started cri-containerd-c834442d2f5ba39b5f2ec0546168c6bf29bbbe0eda1b63f79f453724f7dedd85.scope - libcontainer container c834442d2f5ba39b5f2ec0546168c6bf29bbbe0eda1b63f79f453724f7dedd85. 
May 16 10:04:32.357711 containerd[1591]: time="2025-05-16T10:04:32.357669422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cm57r,Uid:57d462f3-4a3d-42b3-9610-fcce09d094b9,Namespace:calico-system,Attempt:0,} returns sandbox id \"c834442d2f5ba39b5f2ec0546168c6bf29bbbe0eda1b63f79f453724f7dedd85\"" May 16 10:04:32.358452 kubelet[2712]: E0516 10:04:32.358422 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:04:33.776669 kubelet[2712]: E0516 10:04:33.776628 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7jrt7" podUID="32e26eae-39a6-476b-8739-ed86db555147" May 16 10:04:34.464909 kubelet[2712]: E0516 10:04:34.464820 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:04:34.555317 kubelet[2712]: E0516 10:04:34.555271 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:34.555317 kubelet[2712]: W0516 10:04:34.555298 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:34.555317 kubelet[2712]: E0516 10:04:34.555322 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 10:04:34.555613 kubelet[2712]: E0516 10:04:34.555591 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:34.555613 kubelet[2712]: W0516 10:04:34.555609 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:34.555695 kubelet[2712]: E0516 10:04:34.555623 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 10:04:34.555883 kubelet[2712]: E0516 10:04:34.555868 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:34.555928 kubelet[2712]: W0516 10:04:34.555892 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:34.555928 kubelet[2712]: E0516 10:04:34.555904 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 10:04:34.556156 kubelet[2712]: E0516 10:04:34.556137 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:34.556156 kubelet[2712]: W0516 10:04:34.556150 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:34.556237 kubelet[2712]: E0516 10:04:34.556162 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 10:04:34.556385 kubelet[2712]: E0516 10:04:34.556370 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:34.556385 kubelet[2712]: W0516 10:04:34.556382 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:34.556481 kubelet[2712]: E0516 10:04:34.556392 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 10:04:34.891941 kubelet[2712]: E0516 10:04:34.891894 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:04:34.959108 kubelet[2712]: E0516 10:04:34.959052 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:34.959108 kubelet[2712]: W0516 10:04:34.959073 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:34.959108 kubelet[2712]: E0516 10:04:34.959092 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 10:04:34.959330 kubelet[2712]: E0516 10:04:34.959252 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:34.959330 kubelet[2712]: W0516 10:04:34.959262 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:34.959330 kubelet[2712]: E0516 10:04:34.959273 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 10:04:34.959435 kubelet[2712]: E0516 10:04:34.959413 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:34.959435 kubelet[2712]: W0516 10:04:34.959422 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:34.959435 kubelet[2712]: E0516 10:04:34.959431 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 10:04:34.959627 kubelet[2712]: E0516 10:04:34.959594 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:34.959627 kubelet[2712]: W0516 10:04:34.959617 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:34.959627 kubelet[2712]: E0516 10:04:34.959627 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 10:04:34.959815 kubelet[2712]: E0516 10:04:34.959796 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:34.959815 kubelet[2712]: W0516 10:04:34.959809 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:34.959899 kubelet[2712]: E0516 10:04:34.959822 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 10:04:35.775804 kubelet[2712]: E0516 10:04:35.775766 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7jrt7" podUID="32e26eae-39a6-476b-8739-ed86db555147" May 16 10:04:36.066582 containerd[1591]: time="2025-05-16T10:04:36.066159705Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 10:04:36.067485 containerd[1591]: time="2025-05-16T10:04:36.067162183Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870" May 16 10:04:36.068346 containerd[1591]: time="2025-05-16T10:04:36.068300677Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 10:04:36.070935 containerd[1591]: time="2025-05-16T10:04:36.070877210Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 10:04:36.071455 containerd[1591]: time="2025-05-16T10:04:36.071414407Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 3.796164966s" May 16 10:04:36.071554 containerd[1591]: time="2025-05-16T10:04:36.071458580Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\"" May 16 10:04:36.073249 containerd[1591]: time="2025-05-16T10:04:36.073012599Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 16 10:04:36.085476 containerd[1591]: time="2025-05-16T10:04:36.085397057Z" level=info msg="CreateContainer within sandbox \"a9d5c577730467aaae8d8dadb01a3139de40daa149276f40fad45ab2bd573530\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 16 10:04:36.120557 containerd[1591]: time="2025-05-16T10:04:36.120078692Z" level=info msg="Container 7bece9f5134c928d285b1bae01bd96d9a0a1fe52968b22247985bf93bf887e92: CDI devices from CRI Config.CDIDevices: []" May 16 10:04:36.130046 containerd[1591]: time="2025-05-16T10:04:36.129955687Z" level=info msg="CreateContainer within sandbox \"a9d5c577730467aaae8d8dadb01a3139de40daa149276f40fad45ab2bd573530\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"7bece9f5134c928d285b1bae01bd96d9a0a1fe52968b22247985bf93bf887e92\"" May 16 10:04:36.132178 containerd[1591]: time="2025-05-16T10:04:36.131501907Z" level=info msg="StartContainer for \"7bece9f5134c928d285b1bae01bd96d9a0a1fe52968b22247985bf93bf887e92\"" May 16 10:04:36.132731 containerd[1591]: time="2025-05-16T10:04:36.132706975Z" level=info 
msg="connecting to shim 7bece9f5134c928d285b1bae01bd96d9a0a1fe52968b22247985bf93bf887e92" address="unix:///run/containerd/s/dcf47f874a45a2598afde8a56774566309e0dcabb5495c0808f71c3e569dcaf8" protocol=ttrpc version=3 May 16 10:04:36.159781 systemd[1]: Started cri-containerd-7bece9f5134c928d285b1bae01bd96d9a0a1fe52968b22247985bf93bf887e92.scope - libcontainer container 7bece9f5134c928d285b1bae01bd96d9a0a1fe52968b22247985bf93bf887e92. May 16 10:04:36.283623 containerd[1591]: time="2025-05-16T10:04:36.283576452Z" level=info msg="StartContainer for \"7bece9f5134c928d285b1bae01bd96d9a0a1fe52968b22247985bf93bf887e92\" returns successfully" May 16 10:04:36.897489 kubelet[2712]: E0516 10:04:36.897149 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:04:36.973286 kubelet[2712]: E0516 10:04:36.973245 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:36.973286 kubelet[2712]: W0516 10:04:36.973270 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:36.973286 kubelet[2712]: E0516 10:04:36.973292 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 10:04:36.973566 kubelet[2712]: E0516 10:04:36.973505 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:36.973566 kubelet[2712]: W0516 10:04:36.973537 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:36.973566 kubelet[2712]: E0516 10:04:36.973546 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 10:04:36.973742 kubelet[2712]: E0516 10:04:36.973714 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:36.973742 kubelet[2712]: W0516 10:04:36.973725 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:36.973742 kubelet[2712]: E0516 10:04:36.973732 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 10:04:36.973920 kubelet[2712]: E0516 10:04:36.973899 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:36.973920 kubelet[2712]: W0516 10:04:36.973909 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:36.973920 kubelet[2712]: E0516 10:04:36.973917 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 10:04:36.974070 kubelet[2712]: E0516 10:04:36.974050 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:36.974070 kubelet[2712]: W0516 10:04:36.974059 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:36.974070 kubelet[2712]: E0516 10:04:36.974067 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 10:04:36.974208 kubelet[2712]: E0516 10:04:36.974188 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:36.974208 kubelet[2712]: W0516 10:04:36.974198 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:36.974208 kubelet[2712]: E0516 10:04:36.974209 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 10:04:36.974366 kubelet[2712]: E0516 10:04:36.974347 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:36.974366 kubelet[2712]: W0516 10:04:36.974356 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:36.974366 kubelet[2712]: E0516 10:04:36.974363 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 10:04:36.974531 kubelet[2712]: E0516 10:04:36.974494 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:36.974531 kubelet[2712]: W0516 10:04:36.974503 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:36.974531 kubelet[2712]: E0516 10:04:36.974510 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 10:04:36.974689 kubelet[2712]: E0516 10:04:36.974670 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:36.974689 kubelet[2712]: W0516 10:04:36.974680 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:36.974689 kubelet[2712]: E0516 10:04:36.974687 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 10:04:36.974835 kubelet[2712]: E0516 10:04:36.974814 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:36.974835 kubelet[2712]: W0516 10:04:36.974826 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:36.974835 kubelet[2712]: E0516 10:04:36.974835 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 10:04:36.974990 kubelet[2712]: E0516 10:04:36.974971 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:36.974990 kubelet[2712]: W0516 10:04:36.974981 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:36.974990 kubelet[2712]: E0516 10:04:36.974988 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 10:04:36.975130 kubelet[2712]: E0516 10:04:36.975111 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:36.975130 kubelet[2712]: W0516 10:04:36.975120 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:36.975130 kubelet[2712]: E0516 10:04:36.975127 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 10:04:36.975276 kubelet[2712]: E0516 10:04:36.975257 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:36.975276 kubelet[2712]: W0516 10:04:36.975266 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:36.975276 kubelet[2712]: E0516 10:04:36.975273 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 10:04:36.975411 kubelet[2712]: E0516 10:04:36.975393 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:36.975411 kubelet[2712]: W0516 10:04:36.975402 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:36.975411 kubelet[2712]: E0516 10:04:36.975408 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 10:04:36.975573 kubelet[2712]: E0516 10:04:36.975554 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:36.975573 kubelet[2712]: W0516 10:04:36.975565 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:36.975573 kubelet[2712]: E0516 10:04:36.975573 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 10:04:36.982497 kubelet[2712]: E0516 10:04:36.982467 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:36.982497 kubelet[2712]: W0516 10:04:36.982490 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:36.982675 kubelet[2712]: E0516 10:04:36.982511 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 10:04:36.982777 kubelet[2712]: E0516 10:04:36.982754 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:36.982777 kubelet[2712]: W0516 10:04:36.982768 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:36.982840 kubelet[2712]: E0516 10:04:36.982785 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 10:04:36.983039 kubelet[2712]: E0516 10:04:36.983009 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:36.983039 kubelet[2712]: W0516 10:04:36.983024 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:36.983039 kubelet[2712]: E0516 10:04:36.983042 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 10:04:36.983253 kubelet[2712]: E0516 10:04:36.983236 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:36.983253 kubelet[2712]: W0516 10:04:36.983246 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:36.983338 kubelet[2712]: E0516 10:04:36.983257 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 10:04:36.983432 kubelet[2712]: E0516 10:04:36.983412 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:36.983432 kubelet[2712]: W0516 10:04:36.983425 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:36.983542 kubelet[2712]: E0516 10:04:36.983440 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 10:04:36.983695 kubelet[2712]: E0516 10:04:36.983677 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:36.983695 kubelet[2712]: W0516 10:04:36.983688 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:36.983784 kubelet[2712]: E0516 10:04:36.983699 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 10:04:36.983884 kubelet[2712]: E0516 10:04:36.983864 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:36.983884 kubelet[2712]: W0516 10:04:36.983878 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:36.983958 kubelet[2712]: E0516 10:04:36.983891 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 10:04:36.984075 kubelet[2712]: E0516 10:04:36.984055 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:36.984075 kubelet[2712]: W0516 10:04:36.984069 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:36.984150 kubelet[2712]: E0516 10:04:36.984099 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 10:04:36.984248 kubelet[2712]: E0516 10:04:36.984233 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:36.984248 kubelet[2712]: W0516 10:04:36.984244 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:36.984297 kubelet[2712]: E0516 10:04:36.984269 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 10:04:36.984413 kubelet[2712]: E0516 10:04:36.984399 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:36.984413 kubelet[2712]: W0516 10:04:36.984409 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:36.984479 kubelet[2712]: E0516 10:04:36.984422 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 10:04:36.984642 kubelet[2712]: E0516 10:04:36.984624 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:36.984642 kubelet[2712]: W0516 10:04:36.984635 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:36.984700 kubelet[2712]: E0516 10:04:36.984650 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 10:04:36.984851 kubelet[2712]: E0516 10:04:36.984841 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:36.984851 kubelet[2712]: W0516 10:04:36.984849 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:36.984904 kubelet[2712]: E0516 10:04:36.984861 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 10:04:36.985136 kubelet[2712]: E0516 10:04:36.985121 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:36.985136 kubelet[2712]: W0516 10:04:36.985132 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:36.985188 kubelet[2712]: E0516 10:04:36.985145 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 10:04:36.985332 kubelet[2712]: E0516 10:04:36.985319 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:36.985332 kubelet[2712]: W0516 10:04:36.985329 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:36.985384 kubelet[2712]: E0516 10:04:36.985342 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 10:04:36.985565 kubelet[2712]: E0516 10:04:36.985535 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:36.985565 kubelet[2712]: W0516 10:04:36.985545 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:36.985565 kubelet[2712]: E0516 10:04:36.985558 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 10:04:36.985777 kubelet[2712]: E0516 10:04:36.985765 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:36.985777 kubelet[2712]: W0516 10:04:36.985776 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:36.985827 kubelet[2712]: E0516 10:04:36.985788 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 10:04:36.985964 kubelet[2712]: E0516 10:04:36.985953 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:36.985964 kubelet[2712]: W0516 10:04:36.985961 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:36.986012 kubelet[2712]: E0516 10:04:36.985969 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 10:04:36.988197 kubelet[2712]: E0516 10:04:36.988175 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:36.988197 kubelet[2712]: W0516 10:04:36.988189 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:36.988250 kubelet[2712]: E0516 10:04:36.988199 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 10:04:37.015929 kubelet[2712]: I0516 10:04:37.015836 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-58869cf6bb-gt9hh" podStartSLOduration=2.217985084 podStartE2EDuration="6.015819251s" podCreationTimestamp="2025-05-16 10:04:31 +0000 UTC" firstStartedPulling="2025-05-16 10:04:32.274979981 +0000 UTC m=+12.697123894" lastFinishedPulling="2025-05-16 10:04:36.072814158 +0000 UTC m=+16.494958061" observedRunningTime="2025-05-16 10:04:37.015275239 +0000 UTC m=+17.437419172" watchObservedRunningTime="2025-05-16 10:04:37.015819251 +0000 UTC m=+17.437963164" May 16 10:04:37.776472 kubelet[2712]: E0516 10:04:37.776397 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7jrt7" podUID="32e26eae-39a6-476b-8739-ed86db555147" May 16 10:04:37.897961 kubelet[2712]: E0516 10:04:37.897924 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:04:37.982067 kubelet[2712]: E0516 10:04:37.982020 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:37.982067 kubelet[2712]: W0516 10:04:37.982054 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:37.982229 kubelet[2712]: E0516 10:04:37.982084 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 10:04:37.982352 kubelet[2712]: E0516 10:04:37.982334 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:37.982352 kubelet[2712]: W0516 10:04:37.982349 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:37.982404 kubelet[2712]: E0516 10:04:37.982361 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 10:04:37.982568 kubelet[2712]: E0516 10:04:37.982549 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:37.982568 kubelet[2712]: W0516 10:04:37.982562 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:37.982628 kubelet[2712]: E0516 10:04:37.982572 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 10:04:37.982785 kubelet[2712]: E0516 10:04:37.982768 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:37.982785 kubelet[2712]: W0516 10:04:37.982780 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:37.982837 kubelet[2712]: E0516 10:04:37.982791 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 10:04:37.983056 kubelet[2712]: E0516 10:04:37.983031 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:37.983094 kubelet[2712]: W0516 10:04:37.983055 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:37.983094 kubelet[2712]: E0516 10:04:37.983080 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 10:04:37.983240 kubelet[2712]: E0516 10:04:37.983229 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:37.983240 kubelet[2712]: W0516 10:04:37.983237 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:37.983290 kubelet[2712]: E0516 10:04:37.983245 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 10:04:37.983455 kubelet[2712]: E0516 10:04:37.983438 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:37.983455 kubelet[2712]: W0516 10:04:37.983447 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:37.983502 kubelet[2712]: E0516 10:04:37.983454 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 10:04:37.983675 kubelet[2712]: E0516 10:04:37.983657 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:37.983675 kubelet[2712]: W0516 10:04:37.983673 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:37.983730 kubelet[2712]: E0516 10:04:37.983686 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 10:04:37.983921 kubelet[2712]: E0516 10:04:37.983904 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:37.983921 kubelet[2712]: W0516 10:04:37.983918 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:37.983982 kubelet[2712]: E0516 10:04:37.983930 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 10:04:37.984172 kubelet[2712]: E0516 10:04:37.984155 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:37.984172 kubelet[2712]: W0516 10:04:37.984169 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:37.984233 kubelet[2712]: E0516 10:04:37.984180 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 10:04:37.984407 kubelet[2712]: E0516 10:04:37.984392 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:37.984446 kubelet[2712]: W0516 10:04:37.984406 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:37.984446 kubelet[2712]: E0516 10:04:37.984417 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 10:04:37.984642 kubelet[2712]: E0516 10:04:37.984625 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:37.984642 kubelet[2712]: W0516 10:04:37.984639 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:37.984709 kubelet[2712]: E0516 10:04:37.984650 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 10:04:37.984869 kubelet[2712]: E0516 10:04:37.984842 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:37.984869 kubelet[2712]: W0516 10:04:37.984865 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:37.984930 kubelet[2712]: E0516 10:04:37.984876 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 10:04:37.985119 kubelet[2712]: E0516 10:04:37.985095 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:37.985119 kubelet[2712]: W0516 10:04:37.985108 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:37.985119 kubelet[2712]: E0516 10:04:37.985117 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 10:04:37.985318 kubelet[2712]: E0516 10:04:37.985301 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:37.985318 kubelet[2712]: W0516 10:04:37.985314 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:37.985373 kubelet[2712]: E0516 10:04:37.985325 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 10:04:37.990590 kubelet[2712]: E0516 10:04:37.990570 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:37.990590 kubelet[2712]: W0516 10:04:37.990583 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:37.990654 kubelet[2712]: E0516 10:04:37.990594 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 10:04:37.990782 kubelet[2712]: E0516 10:04:37.990766 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:37.990782 kubelet[2712]: W0516 10:04:37.990775 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:37.990830 kubelet[2712]: E0516 10:04:37.990788 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 10:04:37.990963 kubelet[2712]: E0516 10:04:37.990950 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:37.990963 kubelet[2712]: W0516 10:04:37.990961 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:37.991013 kubelet[2712]: E0516 10:04:37.990971 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 10:04:37.991208 kubelet[2712]: E0516 10:04:37.991184 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:37.991208 kubelet[2712]: W0516 10:04:37.991201 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:37.991256 kubelet[2712]: E0516 10:04:37.991218 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 10:04:37.991457 kubelet[2712]: E0516 10:04:37.991434 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:37.991457 kubelet[2712]: W0516 10:04:37.991452 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:37.991506 kubelet[2712]: E0516 10:04:37.991472 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 10:04:37.991692 kubelet[2712]: E0516 10:04:37.991679 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:37.991723 kubelet[2712]: W0516 10:04:37.991691 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:37.991723 kubelet[2712]: E0516 10:04:37.991707 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 10:04:37.991899 kubelet[2712]: E0516 10:04:37.991886 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:37.991921 kubelet[2712]: W0516 10:04:37.991898 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:37.991921 kubelet[2712]: E0516 10:04:37.991914 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 10:04:37.992130 kubelet[2712]: E0516 10:04:37.992112 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:37.992130 kubelet[2712]: W0516 10:04:37.992127 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:37.992185 kubelet[2712]: E0516 10:04:37.992144 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 10:04:37.992368 kubelet[2712]: E0516 10:04:37.992356 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:37.992392 kubelet[2712]: W0516 10:04:37.992368 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:37.992413 kubelet[2712]: E0516 10:04:37.992387 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 10:04:37.992618 kubelet[2712]: E0516 10:04:37.992606 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:37.992641 kubelet[2712]: W0516 10:04:37.992618 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:37.992641 kubelet[2712]: E0516 10:04:37.992634 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 10:04:37.992840 kubelet[2712]: E0516 10:04:37.992829 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:37.992879 kubelet[2712]: W0516 10:04:37.992840 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:37.992879 kubelet[2712]: E0516 10:04:37.992864 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 10:04:37.993121 kubelet[2712]: E0516 10:04:37.993110 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:37.993121 kubelet[2712]: W0516 10:04:37.993119 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:37.993169 kubelet[2712]: E0516 10:04:37.993133 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 10:04:37.993372 kubelet[2712]: E0516 10:04:37.993354 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:37.993402 kubelet[2712]: W0516 10:04:37.993370 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:37.993426 kubelet[2712]: E0516 10:04:37.993406 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 10:04:37.993650 kubelet[2712]: E0516 10:04:37.993633 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:37.993650 kubelet[2712]: W0516 10:04:37.993647 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:37.993711 kubelet[2712]: E0516 10:04:37.993673 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 10:04:37.993850 kubelet[2712]: E0516 10:04:37.993834 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:37.993891 kubelet[2712]: W0516 10:04:37.993849 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:37.993891 kubelet[2712]: E0516 10:04:37.993879 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 10:04:37.994096 kubelet[2712]: E0516 10:04:37.994079 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:37.994096 kubelet[2712]: W0516 10:04:37.994092 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:37.994148 kubelet[2712]: E0516 10:04:37.994103 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 10:04:37.994379 kubelet[2712]: E0516 10:04:37.994356 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:37.994379 kubelet[2712]: W0516 10:04:37.994370 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:37.994424 kubelet[2712]: E0516 10:04:37.994381 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 10:04:37.994780 kubelet[2712]: E0516 10:04:37.994764 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:37.994780 kubelet[2712]: W0516 10:04:37.994777 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:37.994837 kubelet[2712]: E0516 10:04:37.994788 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 10:04:38.899465 kubelet[2712]: E0516 10:04:38.899425 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:04:38.991372 kubelet[2712]: E0516 10:04:38.991323 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:38.991372 kubelet[2712]: W0516 10:04:38.991349 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:38.991372 kubelet[2712]: E0516 10:04:38.991375 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 10:04:38.991597 kubelet[2712]: E0516 10:04:38.991578 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:38.991597 kubelet[2712]: W0516 10:04:38.991590 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:38.991711 kubelet[2712]: E0516 10:04:38.991600 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 10:04:38.991782 kubelet[2712]: E0516 10:04:38.991765 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:38.991782 kubelet[2712]: W0516 10:04:38.991775 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:38.991837 kubelet[2712]: E0516 10:04:38.991783 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 10:04:38.991944 kubelet[2712]: E0516 10:04:38.991927 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:38.991944 kubelet[2712]: W0516 10:04:38.991938 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:38.992006 kubelet[2712]: E0516 10:04:38.991946 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 10:04:38.992139 kubelet[2712]: E0516 10:04:38.992124 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:38.992139 kubelet[2712]: W0516 10:04:38.992134 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:38.992204 kubelet[2712]: E0516 10:04:38.992143 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 10:04:38.992304 kubelet[2712]: E0516 10:04:38.992287 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:38.992304 kubelet[2712]: W0516 10:04:38.992297 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:38.992376 kubelet[2712]: E0516 10:04:38.992305 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 10:04:38.992457 kubelet[2712]: E0516 10:04:38.992441 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:38.992457 kubelet[2712]: W0516 10:04:38.992450 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:38.992533 kubelet[2712]: E0516 10:04:38.992458 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 10:04:38.992631 kubelet[2712]: E0516 10:04:38.992615 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:38.992631 kubelet[2712]: W0516 10:04:38.992625 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:38.992706 kubelet[2712]: E0516 10:04:38.992634 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 10:04:38.992797 kubelet[2712]: E0516 10:04:38.992781 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:38.992797 kubelet[2712]: W0516 10:04:38.992791 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:38.992857 kubelet[2712]: E0516 10:04:38.992800 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 10:04:38.992957 kubelet[2712]: E0516 10:04:38.992942 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:38.992957 kubelet[2712]: W0516 10:04:38.992952 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:38.993023 kubelet[2712]: E0516 10:04:38.992960 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 10:04:38.993119 kubelet[2712]: E0516 10:04:38.993103 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:38.993119 kubelet[2712]: W0516 10:04:38.993113 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:38.993183 kubelet[2712]: E0516 10:04:38.993122 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 10:04:38.993288 kubelet[2712]: E0516 10:04:38.993272 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:38.993288 kubelet[2712]: W0516 10:04:38.993282 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:38.993361 kubelet[2712]: E0516 10:04:38.993291 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 10:04:38.993464 kubelet[2712]: E0516 10:04:38.993449 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:38.993464 kubelet[2712]: W0516 10:04:38.993458 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:38.993544 kubelet[2712]: E0516 10:04:38.993466 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 10:04:38.993633 kubelet[2712]: E0516 10:04:38.993620 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:38.993633 kubelet[2712]: W0516 10:04:38.993630 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:38.993633 kubelet[2712]: E0516 10:04:38.993638 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 10:04:38.993899 kubelet[2712]: E0516 10:04:38.993875 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:38.993899 kubelet[2712]: W0516 10:04:38.993886 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:38.993956 kubelet[2712]: E0516 10:04:38.993914 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 10:04:38.998058 kubelet[2712]: E0516 10:04:38.998035 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:38.998058 kubelet[2712]: W0516 10:04:38.998046 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:38.998058 kubelet[2712]: E0516 10:04:38.998055 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 10:04:38.998297 kubelet[2712]: E0516 10:04:38.998275 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:38.998297 kubelet[2712]: W0516 10:04:38.998285 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:38.998297 kubelet[2712]: E0516 10:04:38.998298 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 10:04:38.998506 kubelet[2712]: E0516 10:04:38.998485 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:38.998506 kubelet[2712]: W0516 10:04:38.998498 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:38.998592 kubelet[2712]: E0516 10:04:38.998524 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 10:04:38.998742 kubelet[2712]: E0516 10:04:38.998721 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:38.998742 kubelet[2712]: W0516 10:04:38.998732 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:38.998802 kubelet[2712]: E0516 10:04:38.998746 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 10:04:38.999033 kubelet[2712]: E0516 10:04:38.998990 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:38.999139 kubelet[2712]: W0516 10:04:38.999028 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:38.999139 kubelet[2712]: E0516 10:04:38.999069 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 10:04:38.999427 kubelet[2712]: E0516 10:04:38.999407 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:38.999427 kubelet[2712]: W0516 10:04:38.999423 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:38.999498 kubelet[2712]: E0516 10:04:38.999442 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 10:04:38.999671 kubelet[2712]: E0516 10:04:38.999657 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:38.999671 kubelet[2712]: W0516 10:04:38.999667 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:38.999739 kubelet[2712]: E0516 10:04:38.999679 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 10:04:38.999857 kubelet[2712]: E0516 10:04:38.999843 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:38.999857 kubelet[2712]: W0516 10:04:38.999855 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:38.999922 kubelet[2712]: E0516 10:04:38.999871 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 10:04:39.000044 kubelet[2712]: E0516 10:04:39.000031 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:39.000044 kubelet[2712]: W0516 10:04:39.000040 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:39.000109 kubelet[2712]: E0516 10:04:39.000052 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 10:04:39.000208 kubelet[2712]: E0516 10:04:39.000196 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:39.000208 kubelet[2712]: W0516 10:04:39.000204 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:39.000280 kubelet[2712]: E0516 10:04:39.000215 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 10:04:39.000410 kubelet[2712]: E0516 10:04:39.000395 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:39.000410 kubelet[2712]: W0516 10:04:39.000406 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:39.000478 kubelet[2712]: E0516 10:04:39.000421 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 10:04:39.000683 kubelet[2712]: E0516 10:04:39.000662 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:39.000683 kubelet[2712]: W0516 10:04:39.000678 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:39.000750 kubelet[2712]: E0516 10:04:39.000696 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 10:04:39.000926 kubelet[2712]: E0516 10:04:39.000907 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:39.000926 kubelet[2712]: W0516 10:04:39.000922 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:39.000997 kubelet[2712]: E0516 10:04:39.000940 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 10:04:39.001157 kubelet[2712]: E0516 10:04:39.001138 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:39.001157 kubelet[2712]: W0516 10:04:39.001153 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:39.001223 kubelet[2712]: E0516 10:04:39.001170 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 10:04:39.001403 kubelet[2712]: E0516 10:04:39.001385 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:39.001403 kubelet[2712]: W0516 10:04:39.001399 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:39.001477 kubelet[2712]: E0516 10:04:39.001417 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 10:04:39.001609 kubelet[2712]: E0516 10:04:39.001593 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:39.001609 kubelet[2712]: W0516 10:04:39.001606 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:39.001669 kubelet[2712]: E0516 10:04:39.001617 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 10:04:39.001813 kubelet[2712]: E0516 10:04:39.001794 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:39.001813 kubelet[2712]: W0516 10:04:39.001809 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:39.001882 kubelet[2712]: E0516 10:04:39.001829 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 10:04:39.002061 kubelet[2712]: E0516 10:04:39.002042 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 10:04:39.002061 kubelet[2712]: W0516 10:04:39.002056 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 10:04:39.002127 kubelet[2712]: E0516 10:04:39.002067 2712 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 10:04:39.777479 kubelet[2712]: E0516 10:04:39.777109 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7jrt7" podUID="32e26eae-39a6-476b-8739-ed86db555147" May 16 10:04:41.776507 kubelet[2712]: E0516 10:04:41.776466 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7jrt7" podUID="32e26eae-39a6-476b-8739-ed86db555147" May 16 10:04:43.105177 containerd[1591]: time="2025-05-16T10:04:43.105126996Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 10:04:43.106050 containerd[1591]: time="2025-05-16T10:04:43.106010815Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937" May 16 10:04:43.107315 containerd[1591]: time="2025-05-16T10:04:43.107277065Z" 
level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 10:04:43.110034 containerd[1591]: time="2025-05-16T10:04:43.109985034Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 10:04:43.110810 containerd[1591]: time="2025-05-16T10:04:43.110780057Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 7.037733861s" May 16 10:04:43.110810 containerd[1591]: time="2025-05-16T10:04:43.110808759Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" May 16 10:04:43.112470 containerd[1591]: time="2025-05-16T10:04:43.112432057Z" level=info msg="CreateContainer within sandbox \"c834442d2f5ba39b5f2ec0546168c6bf29bbbe0eda1b63f79f453724f7dedd85\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 16 10:04:43.122053 containerd[1591]: time="2025-05-16T10:04:43.122015350Z" level=info msg="Container de97fc8fe5f7e93030d48cdcc5770bb5cf9a4e86b2b119563c067c34e7e50def: CDI devices from CRI Config.CDIDevices: []" May 16 10:04:43.131051 containerd[1591]: time="2025-05-16T10:04:43.131010075Z" level=info msg="CreateContainer within sandbox \"c834442d2f5ba39b5f2ec0546168c6bf29bbbe0eda1b63f79f453724f7dedd85\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id 
\"de97fc8fe5f7e93030d48cdcc5770bb5cf9a4e86b2b119563c067c34e7e50def\"" May 16 10:04:43.131485 containerd[1591]: time="2025-05-16T10:04:43.131445726Z" level=info msg="StartContainer for \"de97fc8fe5f7e93030d48cdcc5770bb5cf9a4e86b2b119563c067c34e7e50def\"" May 16 10:04:43.132961 containerd[1591]: time="2025-05-16T10:04:43.132930197Z" level=info msg="connecting to shim de97fc8fe5f7e93030d48cdcc5770bb5cf9a4e86b2b119563c067c34e7e50def" address="unix:///run/containerd/s/de0a19da887bcded7dee23c148b32c1349f51492d48b029a54bbe25a05229872" protocol=ttrpc version=3 May 16 10:04:43.162769 systemd[1]: Started cri-containerd-de97fc8fe5f7e93030d48cdcc5770bb5cf9a4e86b2b119563c067c34e7e50def.scope - libcontainer container de97fc8fe5f7e93030d48cdcc5770bb5cf9a4e86b2b119563c067c34e7e50def. May 16 10:04:43.205580 containerd[1591]: time="2025-05-16T10:04:43.205448965Z" level=info msg="StartContainer for \"de97fc8fe5f7e93030d48cdcc5770bb5cf9a4e86b2b119563c067c34e7e50def\" returns successfully" May 16 10:04:43.215911 systemd[1]: cri-containerd-de97fc8fe5f7e93030d48cdcc5770bb5cf9a4e86b2b119563c067c34e7e50def.scope: Deactivated successfully. May 16 10:04:43.218083 containerd[1591]: time="2025-05-16T10:04:43.218038714Z" level=info msg="received exit event container_id:\"de97fc8fe5f7e93030d48cdcc5770bb5cf9a4e86b2b119563c067c34e7e50def\" id:\"de97fc8fe5f7e93030d48cdcc5770bb5cf9a4e86b2b119563c067c34e7e50def\" pid:3477 exited_at:{seconds:1747389883 nanos:217568699}" May 16 10:04:43.218228 containerd[1591]: time="2025-05-16T10:04:43.218184115Z" level=info msg="TaskExit event in podsandbox handler container_id:\"de97fc8fe5f7e93030d48cdcc5770bb5cf9a4e86b2b119563c067c34e7e50def\" id:\"de97fc8fe5f7e93030d48cdcc5770bb5cf9a4e86b2b119563c067c34e7e50def\" pid:3477 exited_at:{seconds:1747389883 nanos:217568699}" May 16 10:04:43.240285 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-de97fc8fe5f7e93030d48cdcc5770bb5cf9a4e86b2b119563c067c34e7e50def-rootfs.mount: Deactivated successfully. 
May 16 10:04:43.776703 kubelet[2712]: E0516 10:04:43.776644 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7jrt7" podUID="32e26eae-39a6-476b-8739-ed86db555147" May 16 10:04:43.908637 kubelet[2712]: E0516 10:04:43.908608 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:04:43.910885 containerd[1591]: time="2025-05-16T10:04:43.910843500Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 16 10:04:45.777760 kubelet[2712]: E0516 10:04:45.777680 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7jrt7" podUID="32e26eae-39a6-476b-8739-ed86db555147" May 16 10:04:47.776564 kubelet[2712]: E0516 10:04:47.776157 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7jrt7" podUID="32e26eae-39a6-476b-8739-ed86db555147" May 16 10:04:49.776271 kubelet[2712]: E0516 10:04:49.776214 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7jrt7" podUID="32e26eae-39a6-476b-8739-ed86db555147" May 16 10:04:50.187723 systemd[1]: Started sshd@7-10.0.0.79:22-10.0.0.1:49664.service - 
OpenSSH per-connection server daemon (10.0.0.1:49664). May 16 10:04:50.240645 sshd[3519]: Accepted publickey for core from 10.0.0.1 port 49664 ssh2: RSA SHA256:TkuFkvH6sCJ3kuKrabiD7Z8ORwd+XoH0QjfS0JDvRdI May 16 10:04:50.242262 sshd-session[3519]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 10:04:50.247394 systemd-logind[1576]: New session 8 of user core. May 16 10:04:50.258690 systemd[1]: Started session-8.scope - Session 8 of User core. May 16 10:04:51.178940 sshd[3521]: Connection closed by 10.0.0.1 port 49664 May 16 10:04:51.179228 sshd-session[3519]: pam_unix(sshd:session): session closed for user core May 16 10:04:51.183338 systemd[1]: sshd@7-10.0.0.79:22-10.0.0.1:49664.service: Deactivated successfully. May 16 10:04:51.185636 systemd[1]: session-8.scope: Deactivated successfully. May 16 10:04:51.186593 systemd-logind[1576]: Session 8 logged out. Waiting for processes to exit. May 16 10:04:51.187905 systemd-logind[1576]: Removed session 8. May 16 10:04:51.776438 kubelet[2712]: E0516 10:04:51.776373 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7jrt7" podUID="32e26eae-39a6-476b-8739-ed86db555147" May 16 10:04:53.776203 kubelet[2712]: E0516 10:04:53.776116 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7jrt7" podUID="32e26eae-39a6-476b-8739-ed86db555147" May 16 10:04:54.202559 containerd[1591]: time="2025-05-16T10:04:54.202425193Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 
10:04:54.211324 containerd[1591]: time="2025-05-16T10:04:54.211299991Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" May 16 10:04:54.213738 containerd[1591]: time="2025-05-16T10:04:54.213688756Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 10:04:54.215766 containerd[1591]: time="2025-05-16T10:04:54.215723286Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 10:04:54.216314 containerd[1591]: time="2025-05-16T10:04:54.216280636Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 10.305392627s" May 16 10:04:54.216314 containerd[1591]: time="2025-05-16T10:04:54.216306840Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" May 16 10:04:54.219106 containerd[1591]: time="2025-05-16T10:04:54.219074259Z" level=info msg="CreateContainer within sandbox \"c834442d2f5ba39b5f2ec0546168c6bf29bbbe0eda1b63f79f453724f7dedd85\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 16 10:04:54.226731 containerd[1591]: time="2025-05-16T10:04:54.226695637Z" level=info msg="Container 0faebb4dc901331f377a4e293c79248347528ba64632494cacba41751afb01ca: CDI devices from CRI Config.CDIDevices: []" May 16 10:04:54.240164 containerd[1591]: time="2025-05-16T10:04:54.240115686Z" level=info msg="CreateContainer within 
sandbox \"c834442d2f5ba39b5f2ec0546168c6bf29bbbe0eda1b63f79f453724f7dedd85\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0faebb4dc901331f377a4e293c79248347528ba64632494cacba41751afb01ca\"" May 16 10:04:54.242537 containerd[1591]: time="2025-05-16T10:04:54.240623422Z" level=info msg="StartContainer for \"0faebb4dc901331f377a4e293c79248347528ba64632494cacba41751afb01ca\"" May 16 10:04:54.242537 containerd[1591]: time="2025-05-16T10:04:54.242257281Z" level=info msg="connecting to shim 0faebb4dc901331f377a4e293c79248347528ba64632494cacba41751afb01ca" address="unix:///run/containerd/s/de0a19da887bcded7dee23c148b32c1349f51492d48b029a54bbe25a05229872" protocol=ttrpc version=3 May 16 10:04:54.266655 systemd[1]: Started cri-containerd-0faebb4dc901331f377a4e293c79248347528ba64632494cacba41751afb01ca.scope - libcontainer container 0faebb4dc901331f377a4e293c79248347528ba64632494cacba41751afb01ca. May 16 10:04:54.617890 containerd[1591]: time="2025-05-16T10:04:54.617845319Z" level=info msg="StartContainer for \"0faebb4dc901331f377a4e293c79248347528ba64632494cacba41751afb01ca\" returns successfully" May 16 10:04:54.928694 kubelet[2712]: E0516 10:04:54.928569 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:04:55.645717 systemd[1]: cri-containerd-0faebb4dc901331f377a4e293c79248347528ba64632494cacba41751afb01ca.scope: Deactivated successfully. May 16 10:04:55.646101 systemd[1]: cri-containerd-0faebb4dc901331f377a4e293c79248347528ba64632494cacba41751afb01ca.scope: Consumed 528ms CPU time, 159.8M memory peak, 4K read from disk, 154M written to disk. 
May 16 10:04:55.648070 containerd[1591]: time="2025-05-16T10:04:55.648042057Z" level=info msg="received exit event container_id:\"0faebb4dc901331f377a4e293c79248347528ba64632494cacba41751afb01ca\" id:\"0faebb4dc901331f377a4e293c79248347528ba64632494cacba41751afb01ca\" pid:3554 exited_at:{seconds:1747389895 nanos:647851930}" May 16 10:04:55.648413 containerd[1591]: time="2025-05-16T10:04:55.648110861Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0faebb4dc901331f377a4e293c79248347528ba64632494cacba41751afb01ca\" id:\"0faebb4dc901331f377a4e293c79248347528ba64632494cacba41751afb01ca\" pid:3554 exited_at:{seconds:1747389895 nanos:647851930}" May 16 10:04:55.670016 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0faebb4dc901331f377a4e293c79248347528ba64632494cacba41751afb01ca-rootfs.mount: Deactivated successfully. May 16 10:04:55.690548 kubelet[2712]: I0516 10:04:55.689780 2712 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 16 10:04:55.730970 systemd[1]: Created slice kubepods-burstable-pod02cb407d_a66d_415d_8bed_a4bbe798ffd0.slice - libcontainer container kubepods-burstable-pod02cb407d_a66d_415d_8bed_a4bbe798ffd0.slice. May 16 10:04:55.740571 systemd[1]: Created slice kubepods-burstable-pod2a935d52_e822_4eeb_870a_53e15a71d983.slice - libcontainer container kubepods-burstable-pod2a935d52_e822_4eeb_870a_53e15a71d983.slice. May 16 10:04:55.746544 systemd[1]: Created slice kubepods-besteffort-pod7187b294_e77c_457c_b05f_69fc94c6fde1.slice - libcontainer container kubepods-besteffort-pod7187b294_e77c_457c_b05f_69fc94c6fde1.slice. May 16 10:04:55.752575 systemd[1]: Created slice kubepods-besteffort-pod27ccac50_d53c_4ece_9d32_1bfac8cd5d95.slice - libcontainer container kubepods-besteffort-pod27ccac50_d53c_4ece_9d32_1bfac8cd5d95.slice. 
May 16 10:04:55.756931 systemd[1]: Created slice kubepods-besteffort-pod0b6655d4_90fa_4706_bd5a_d8806266de04.slice - libcontainer container kubepods-besteffort-pod0b6655d4_90fa_4706_bd5a_d8806266de04.slice. May 16 10:04:55.782126 systemd[1]: Created slice kubepods-besteffort-pod32e26eae_39a6_476b_8739_ed86db555147.slice - libcontainer container kubepods-besteffort-pod32e26eae_39a6_476b_8739_ed86db555147.slice. May 16 10:04:55.784186 containerd[1591]: time="2025-05-16T10:04:55.784155347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7jrt7,Uid:32e26eae-39a6-476b-8739-ed86db555147,Namespace:calico-system,Attempt:0,}" May 16 10:04:55.812420 kubelet[2712]: I0516 10:04:55.812378 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrb5p\" (UniqueName: \"kubernetes.io/projected/0b6655d4-90fa-4706-bd5a-d8806266de04-kube-api-access-vrb5p\") pod \"calico-apiserver-86d67fd756-ft5g4\" (UID: \"0b6655d4-90fa-4706-bd5a-d8806266de04\") " pod="calico-apiserver/calico-apiserver-86d67fd756-ft5g4" May 16 10:04:55.812656 kubelet[2712]: I0516 10:04:55.812643 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7187b294-e77c-457c-b05f-69fc94c6fde1-tigera-ca-bundle\") pod \"calico-kube-controllers-57b996595-kz4qp\" (UID: \"7187b294-e77c-457c-b05f-69fc94c6fde1\") " pod="calico-system/calico-kube-controllers-57b996595-kz4qp" May 16 10:04:55.812763 kubelet[2712]: I0516 10:04:55.812749 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/02cb407d-a66d-415d-8bed-a4bbe798ffd0-config-volume\") pod \"coredns-668d6bf9bc-bw78z\" (UID: \"02cb407d-a66d-415d-8bed-a4bbe798ffd0\") " pod="kube-system/coredns-668d6bf9bc-bw78z" May 16 10:04:55.812830 kubelet[2712]: I0516 10:04:55.812819 2712 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0b6655d4-90fa-4706-bd5a-d8806266de04-calico-apiserver-certs\") pod \"calico-apiserver-86d67fd756-ft5g4\" (UID: \"0b6655d4-90fa-4706-bd5a-d8806266de04\") " pod="calico-apiserver/calico-apiserver-86d67fd756-ft5g4" May 16 10:04:55.812912 kubelet[2712]: I0516 10:04:55.812898 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x2c9\" (UniqueName: \"kubernetes.io/projected/02cb407d-a66d-415d-8bed-a4bbe798ffd0-kube-api-access-6x2c9\") pod \"coredns-668d6bf9bc-bw78z\" (UID: \"02cb407d-a66d-415d-8bed-a4bbe798ffd0\") " pod="kube-system/coredns-668d6bf9bc-bw78z" May 16 10:04:55.813060 kubelet[2712]: I0516 10:04:55.813007 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/27ccac50-d53c-4ece-9d32-1bfac8cd5d95-calico-apiserver-certs\") pod \"calico-apiserver-86d67fd756-7kxsr\" (UID: \"27ccac50-d53c-4ece-9d32-1bfac8cd5d95\") " pod="calico-apiserver/calico-apiserver-86d67fd756-7kxsr" May 16 10:04:55.813060 kubelet[2712]: I0516 10:04:55.813028 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a935d52-e822-4eeb-870a-53e15a71d983-config-volume\") pod \"coredns-668d6bf9bc-mtmhb\" (UID: \"2a935d52-e822-4eeb-870a-53e15a71d983\") " pod="kube-system/coredns-668d6bf9bc-mtmhb" May 16 10:04:55.813231 kubelet[2712]: I0516 10:04:55.813044 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62d54\" (UniqueName: \"kubernetes.io/projected/7187b294-e77c-457c-b05f-69fc94c6fde1-kube-api-access-62d54\") pod \"calico-kube-controllers-57b996595-kz4qp\" (UID: \"7187b294-e77c-457c-b05f-69fc94c6fde1\") " 
pod="calico-system/calico-kube-controllers-57b996595-kz4qp" May 16 10:04:55.813231 kubelet[2712]: I0516 10:04:55.813175 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfp25\" (UniqueName: \"kubernetes.io/projected/27ccac50-d53c-4ece-9d32-1bfac8cd5d95-kube-api-access-sfp25\") pod \"calico-apiserver-86d67fd756-7kxsr\" (UID: \"27ccac50-d53c-4ece-9d32-1bfac8cd5d95\") " pod="calico-apiserver/calico-apiserver-86d67fd756-7kxsr" May 16 10:04:55.813231 kubelet[2712]: I0516 10:04:55.813193 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpsnw\" (UniqueName: \"kubernetes.io/projected/2a935d52-e822-4eeb-870a-53e15a71d983-kube-api-access-qpsnw\") pod \"coredns-668d6bf9bc-mtmhb\" (UID: \"2a935d52-e822-4eeb-870a-53e15a71d983\") " pod="kube-system/coredns-668d6bf9bc-mtmhb" May 16 10:04:55.844423 containerd[1591]: time="2025-05-16T10:04:55.844376821Z" level=error msg="Failed to destroy network for sandbox \"07cbb29009e9d488673041597373642514682d1307f09df874694e600dd29af3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 10:04:55.845982 containerd[1591]: time="2025-05-16T10:04:55.845948711Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7jrt7,Uid:32e26eae-39a6-476b-8739-ed86db555147,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"07cbb29009e9d488673041597373642514682d1307f09df874694e600dd29af3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 10:04:55.846551 systemd[1]: run-netns-cni\x2d3f106d05\x2d5eec\x2d6334\x2dc16d\x2dfde428b0a536.mount: Deactivated 
successfully. May 16 10:04:55.846645 kubelet[2712]: E0516 10:04:55.846604 2712 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07cbb29009e9d488673041597373642514682d1307f09df874694e600dd29af3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 10:04:55.846685 kubelet[2712]: E0516 10:04:55.846667 2712 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07cbb29009e9d488673041597373642514682d1307f09df874694e600dd29af3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7jrt7" May 16 10:04:55.846714 kubelet[2712]: E0516 10:04:55.846686 2712 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07cbb29009e9d488673041597373642514682d1307f09df874694e600dd29af3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7jrt7" May 16 10:04:55.846742 kubelet[2712]: E0516 10:04:55.846725 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7jrt7_calico-system(32e26eae-39a6-476b-8739-ed86db555147)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7jrt7_calico-system(32e26eae-39a6-476b-8739-ed86db555147)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"07cbb29009e9d488673041597373642514682d1307f09df874694e600dd29af3\\\": plugin type=\\\"calico\\\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7jrt7" podUID="32e26eae-39a6-476b-8739-ed86db555147" May 16 10:04:55.934677 kubelet[2712]: E0516 10:04:55.934592 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:04:55.936120 containerd[1591]: time="2025-05-16T10:04:55.936091209Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 16 10:04:56.037832 kubelet[2712]: E0516 10:04:56.037779 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:04:56.038394 containerd[1591]: time="2025-05-16T10:04:56.038353829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bw78z,Uid:02cb407d-a66d-415d-8bed-a4bbe798ffd0,Namespace:kube-system,Attempt:0,}" May 16 10:04:56.044016 kubelet[2712]: E0516 10:04:56.043975 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:04:56.044533 containerd[1591]: time="2025-05-16T10:04:56.044482608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mtmhb,Uid:2a935d52-e822-4eeb-870a-53e15a71d983,Namespace:kube-system,Attempt:0,}" May 16 10:04:56.049935 containerd[1591]: time="2025-05-16T10:04:56.049639619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57b996595-kz4qp,Uid:7187b294-e77c-457c-b05f-69fc94c6fde1,Namespace:calico-system,Attempt:0,}" May 16 10:04:56.055804 containerd[1591]: time="2025-05-16T10:04:56.055767816Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-86d67fd756-7kxsr,Uid:27ccac50-d53c-4ece-9d32-1bfac8cd5d95,Namespace:calico-apiserver,Attempt:0,}" May 16 10:04:56.062450 containerd[1591]: time="2025-05-16T10:04:56.062374974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86d67fd756-ft5g4,Uid:0b6655d4-90fa-4706-bd5a-d8806266de04,Namespace:calico-apiserver,Attempt:0,}" May 16 10:04:56.116168 containerd[1591]: time="2025-05-16T10:04:56.116083519Z" level=error msg="Failed to destroy network for sandbox \"0d06491def308014b06395bba9ed0fe4b4aa307983a994b1d403a033c65ba8dc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 10:04:56.117991 containerd[1591]: time="2025-05-16T10:04:56.117896491Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bw78z,Uid:02cb407d-a66d-415d-8bed-a4bbe798ffd0,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d06491def308014b06395bba9ed0fe4b4aa307983a994b1d403a033c65ba8dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 10:04:56.118267 kubelet[2712]: E0516 10:04:56.118222 2712 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d06491def308014b06395bba9ed0fe4b4aa307983a994b1d403a033c65ba8dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 10:04:56.118329 kubelet[2712]: E0516 10:04:56.118309 2712 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"0d06491def308014b06395bba9ed0fe4b4aa307983a994b1d403a033c65ba8dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-bw78z" May 16 10:04:56.118526 kubelet[2712]: E0516 10:04:56.118332 2712 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d06491def308014b06395bba9ed0fe4b4aa307983a994b1d403a033c65ba8dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-bw78z" May 16 10:04:56.118587 kubelet[2712]: E0516 10:04:56.118424 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-bw78z_kube-system(02cb407d-a66d-415d-8bed-a4bbe798ffd0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-bw78z_kube-system(02cb407d-a66d-415d-8bed-a4bbe798ffd0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0d06491def308014b06395bba9ed0fe4b4aa307983a994b1d403a033c65ba8dc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-bw78z" podUID="02cb407d-a66d-415d-8bed-a4bbe798ffd0" May 16 10:04:56.147930 containerd[1591]: time="2025-05-16T10:04:56.147868935Z" level=error msg="Failed to destroy network for sandbox \"8bd2c12afec2d4f29c18ab242043785a9df8bb20ca04c20738a120d28c6b685b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 
10:04:56.148932 containerd[1591]: time="2025-05-16T10:04:56.148823968Z" level=error msg="Failed to destroy network for sandbox \"951bc293aa0fe154d0bc705a2ce852f75ba844043f5df0bd2cdcd4a62c505e47\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 10:04:56.149656 containerd[1591]: time="2025-05-16T10:04:56.149630171Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mtmhb,Uid:2a935d52-e822-4eeb-870a-53e15a71d983,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bd2c12afec2d4f29c18ab242043785a9df8bb20ca04c20738a120d28c6b685b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 10:04:56.150088 kubelet[2712]: E0516 10:04:56.150027 2712 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bd2c12afec2d4f29c18ab242043785a9df8bb20ca04c20738a120d28c6b685b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 10:04:56.150261 kubelet[2712]: E0516 10:04:56.150240 2712 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bd2c12afec2d4f29c18ab242043785a9df8bb20ca04c20738a120d28c6b685b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-mtmhb" May 16 10:04:56.150335 kubelet[2712]: E0516 10:04:56.150312 2712 kuberuntime_manager.go:1237] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bd2c12afec2d4f29c18ab242043785a9df8bb20ca04c20738a120d28c6b685b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-mtmhb" May 16 10:04:56.150458 kubelet[2712]: E0516 10:04:56.150434 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-mtmhb_kube-system(2a935d52-e822-4eeb-870a-53e15a71d983)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-mtmhb_kube-system(2a935d52-e822-4eeb-870a-53e15a71d983)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8bd2c12afec2d4f29c18ab242043785a9df8bb20ca04c20738a120d28c6b685b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-mtmhb" podUID="2a935d52-e822-4eeb-870a-53e15a71d983" May 16 10:04:56.151597 containerd[1591]: time="2025-05-16T10:04:56.151405395Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57b996595-kz4qp,Uid:7187b294-e77c-457c-b05f-69fc94c6fde1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"951bc293aa0fe154d0bc705a2ce852f75ba844043f5df0bd2cdcd4a62c505e47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 10:04:56.151988 kubelet[2712]: E0516 10:04:56.151953 2712 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"951bc293aa0fe154d0bc705a2ce852f75ba844043f5df0bd2cdcd4a62c505e47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 10:04:56.152268 kubelet[2712]: E0516 10:04:56.152115 2712 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"951bc293aa0fe154d0bc705a2ce852f75ba844043f5df0bd2cdcd4a62c505e47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57b996595-kz4qp" May 16 10:04:56.152268 kubelet[2712]: E0516 10:04:56.152146 2712 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"951bc293aa0fe154d0bc705a2ce852f75ba844043f5df0bd2cdcd4a62c505e47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57b996595-kz4qp" May 16 10:04:56.152363 kubelet[2712]: E0516 10:04:56.152233 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-57b996595-kz4qp_calico-system(7187b294-e77c-457c-b05f-69fc94c6fde1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-57b996595-kz4qp_calico-system(7187b294-e77c-457c-b05f-69fc94c6fde1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"951bc293aa0fe154d0bc705a2ce852f75ba844043f5df0bd2cdcd4a62c505e47\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-kube-controllers-57b996595-kz4qp" podUID="7187b294-e77c-457c-b05f-69fc94c6fde1" May 16 10:04:56.157371 containerd[1591]: time="2025-05-16T10:04:56.157319206Z" level=error msg="Failed to destroy network for sandbox \"3cb6d42149afdd23c25812ab6bcae8bbe551f53674dafda6c15f256ac668ed6b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 10:04:56.157821 containerd[1591]: time="2025-05-16T10:04:56.157788655Z" level=error msg="Failed to destroy network for sandbox \"ae3fd3b4a26ecc7a87b876775d1e64361ac422edf4b4278de58849fd7637c53e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 10:04:56.158809 containerd[1591]: time="2025-05-16T10:04:56.158777578Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86d67fd756-7kxsr,Uid:27ccac50-d53c-4ece-9d32-1bfac8cd5d95,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3cb6d42149afdd23c25812ab6bcae8bbe551f53674dafda6c15f256ac668ed6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 10:04:56.159015 kubelet[2712]: E0516 10:04:56.158974 2712 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3cb6d42149afdd23c25812ab6bcae8bbe551f53674dafda6c15f256ac668ed6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 10:04:56.159124 kubelet[2712]: E0516 10:04:56.159031 
2712 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3cb6d42149afdd23c25812ab6bcae8bbe551f53674dafda6c15f256ac668ed6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86d67fd756-7kxsr" May 16 10:04:56.159124 kubelet[2712]: E0516 10:04:56.159058 2712 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3cb6d42149afdd23c25812ab6bcae8bbe551f53674dafda6c15f256ac668ed6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86d67fd756-7kxsr" May 16 10:04:56.159124 kubelet[2712]: E0516 10:04:56.159100 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-86d67fd756-7kxsr_calico-apiserver(27ccac50-d53c-4ece-9d32-1bfac8cd5d95)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-86d67fd756-7kxsr_calico-apiserver(27ccac50-d53c-4ece-9d32-1bfac8cd5d95)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3cb6d42149afdd23c25812ab6bcae8bbe551f53674dafda6c15f256ac668ed6b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-86d67fd756-7kxsr" podUID="27ccac50-d53c-4ece-9d32-1bfac8cd5d95" May 16 10:04:56.159731 containerd[1591]: time="2025-05-16T10:04:56.159704393Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-86d67fd756-ft5g4,Uid:0b6655d4-90fa-4706-bd5a-d8806266de04,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae3fd3b4a26ecc7a87b876775d1e64361ac422edf4b4278de58849fd7637c53e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 10:04:56.159823 kubelet[2712]: E0516 10:04:56.159803 2712 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae3fd3b4a26ecc7a87b876775d1e64361ac422edf4b4278de58849fd7637c53e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 10:04:56.159870 kubelet[2712]: E0516 10:04:56.159828 2712 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae3fd3b4a26ecc7a87b876775d1e64361ac422edf4b4278de58849fd7637c53e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86d67fd756-ft5g4" May 16 10:04:56.159870 kubelet[2712]: E0516 10:04:56.159841 2712 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae3fd3b4a26ecc7a87b876775d1e64361ac422edf4b4278de58849fd7637c53e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86d67fd756-ft5g4" May 16 10:04:56.159942 kubelet[2712]: E0516 10:04:56.159876 2712 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-86d67fd756-ft5g4_calico-apiserver(0b6655d4-90fa-4706-bd5a-d8806266de04)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-86d67fd756-ft5g4_calico-apiserver(0b6655d4-90fa-4706-bd5a-d8806266de04)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ae3fd3b4a26ecc7a87b876775d1e64361ac422edf4b4278de58849fd7637c53e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-86d67fd756-ft5g4" podUID="0b6655d4-90fa-4706-bd5a-d8806266de04" May 16 10:04:56.196196 systemd[1]: Started sshd@8-10.0.0.79:22-10.0.0.1:49680.service - OpenSSH per-connection server daemon (10.0.0.1:49680). May 16 10:04:56.252812 sshd[3821]: Accepted publickey for core from 10.0.0.1 port 49680 ssh2: RSA SHA256:TkuFkvH6sCJ3kuKrabiD7Z8ORwd+XoH0QjfS0JDvRdI May 16 10:04:56.254324 sshd-session[3821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 10:04:56.259373 systemd-logind[1576]: New session 9 of user core. May 16 10:04:56.271676 systemd[1]: Started session-9.scope - Session 9 of User core. May 16 10:04:56.380919 sshd[3823]: Connection closed by 10.0.0.1 port 49680 May 16 10:04:56.381258 sshd-session[3821]: pam_unix(sshd:session): session closed for user core May 16 10:04:56.385792 systemd[1]: sshd@8-10.0.0.79:22-10.0.0.1:49680.service: Deactivated successfully. May 16 10:04:56.387787 systemd[1]: session-9.scope: Deactivated successfully. May 16 10:04:56.388631 systemd-logind[1576]: Session 9 logged out. Waiting for processes to exit. May 16 10:04:56.389666 systemd-logind[1576]: Removed session 9. 
May 16 10:05:01.399139 systemd[1]: Started sshd@9-10.0.0.79:22-10.0.0.1:32846.service - OpenSSH per-connection server daemon (10.0.0.1:32846). May 16 10:05:01.463713 sshd[3840]: Accepted publickey for core from 10.0.0.1 port 32846 ssh2: RSA SHA256:TkuFkvH6sCJ3kuKrabiD7Z8ORwd+XoH0QjfS0JDvRdI May 16 10:05:01.465569 sshd-session[3840]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 10:05:01.470560 systemd-logind[1576]: New session 10 of user core. May 16 10:05:01.482692 systemd[1]: Started session-10.scope - Session 10 of User core. May 16 10:05:01.609449 sshd[3842]: Connection closed by 10.0.0.1 port 32846 May 16 10:05:01.609933 sshd-session[3840]: pam_unix(sshd:session): session closed for user core May 16 10:05:01.615230 systemd[1]: sshd@9-10.0.0.79:22-10.0.0.1:32846.service: Deactivated successfully. May 16 10:05:01.617295 systemd[1]: session-10.scope: Deactivated successfully. May 16 10:05:01.618330 systemd-logind[1576]: Session 10 logged out. Waiting for processes to exit. May 16 10:05:01.619685 systemd-logind[1576]: Removed session 10. May 16 10:05:05.210365 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2756220578.mount: Deactivated successfully. 
May 16 10:05:06.046985 containerd[1591]: time="2025-05-16T10:05:06.046920022Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 10:05:06.047835 containerd[1591]: time="2025-05-16T10:05:06.047803548Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" May 16 10:05:06.048921 containerd[1591]: time="2025-05-16T10:05:06.048872804Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 10:05:06.050733 containerd[1591]: time="2025-05-16T10:05:06.050677092Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 10:05:06.051355 containerd[1591]: time="2025-05-16T10:05:06.051322170Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 10.115193544s" May 16 10:05:06.051398 containerd[1591]: time="2025-05-16T10:05:06.051351210Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" May 16 10:05:06.061329 containerd[1591]: time="2025-05-16T10:05:06.061278564Z" level=info msg="CreateContainer within sandbox \"c834442d2f5ba39b5f2ec0546168c6bf29bbbe0eda1b63f79f453724f7dedd85\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 16 10:05:06.073291 containerd[1591]: time="2025-05-16T10:05:06.073258494Z" level=info msg="Container 
9007bb7d0c711c4690623eec0a4e57be3a268dc28f3f9e8075473b3ea8f95af2: CDI devices from CRI Config.CDIDevices: []" May 16 10:05:06.083886 containerd[1591]: time="2025-05-16T10:05:06.083854034Z" level=info msg="CreateContainer within sandbox \"c834442d2f5ba39b5f2ec0546168c6bf29bbbe0eda1b63f79f453724f7dedd85\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9007bb7d0c711c4690623eec0a4e57be3a268dc28f3f9e8075473b3ea8f95af2\"" May 16 10:05:06.084380 containerd[1591]: time="2025-05-16T10:05:06.084349636Z" level=info msg="StartContainer for \"9007bb7d0c711c4690623eec0a4e57be3a268dc28f3f9e8075473b3ea8f95af2\"" May 16 10:05:06.085643 containerd[1591]: time="2025-05-16T10:05:06.085619643Z" level=info msg="connecting to shim 9007bb7d0c711c4690623eec0a4e57be3a268dc28f3f9e8075473b3ea8f95af2" address="unix:///run/containerd/s/de0a19da887bcded7dee23c148b32c1349f51492d48b029a54bbe25a05229872" protocol=ttrpc version=3 May 16 10:05:06.113666 systemd[1]: Started cri-containerd-9007bb7d0c711c4690623eec0a4e57be3a268dc28f3f9e8075473b3ea8f95af2.scope - libcontainer container 9007bb7d0c711c4690623eec0a4e57be3a268dc28f3f9e8075473b3ea8f95af2. May 16 10:05:06.162039 containerd[1591]: time="2025-05-16T10:05:06.161577708Z" level=info msg="StartContainer for \"9007bb7d0c711c4690623eec0a4e57be3a268dc28f3f9e8075473b3ea8f95af2\" returns successfully" May 16 10:05:06.227040 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 16 10:05:06.228320 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. May 16 10:05:06.256633 systemd[1]: cri-containerd-9007bb7d0c711c4690623eec0a4e57be3a268dc28f3f9e8075473b3ea8f95af2.scope: Deactivated successfully. 
May 16 10:05:06.258803 containerd[1591]: time="2025-05-16T10:05:06.258767547Z" level=info msg="received exit event container_id:\"9007bb7d0c711c4690623eec0a4e57be3a268dc28f3f9e8075473b3ea8f95af2\" id:\"9007bb7d0c711c4690623eec0a4e57be3a268dc28f3f9e8075473b3ea8f95af2\" pid:3878 exit_status:1 exited_at:{seconds:1747389906 nanos:258559943}" May 16 10:05:06.258946 containerd[1591]: time="2025-05-16T10:05:06.258891851Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9007bb7d0c711c4690623eec0a4e57be3a268dc28f3f9e8075473b3ea8f95af2\" id:\"9007bb7d0c711c4690623eec0a4e57be3a268dc28f3f9e8075473b3ea8f95af2\" pid:3878 exit_status:1 exited_at:{seconds:1747389906 nanos:258559943}" May 16 10:05:06.285664 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9007bb7d0c711c4690623eec0a4e57be3a268dc28f3f9e8075473b3ea8f95af2-rootfs.mount: Deactivated successfully. May 16 10:05:06.626429 systemd[1]: Started sshd@10-10.0.0.79:22-10.0.0.1:32858.service - OpenSSH per-connection server daemon (10.0.0.1:32858). May 16 10:05:06.776927 containerd[1591]: time="2025-05-16T10:05:06.776884376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86d67fd756-7kxsr,Uid:27ccac50-d53c-4ece-9d32-1bfac8cd5d95,Namespace:calico-apiserver,Attempt:0,}" May 16 10:05:06.955969 kubelet[2712]: E0516 10:05:06.955859 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:05:07.377832 sshd[3923]: Accepted publickey for core from 10.0.0.1 port 32858 ssh2: RSA SHA256:TkuFkvH6sCJ3kuKrabiD7Z8ORwd+XoH0QjfS0JDvRdI May 16 10:05:07.379415 sshd-session[3923]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 10:05:07.383677 systemd-logind[1576]: New session 11 of user core. May 16 10:05:07.393642 systemd[1]: Started session-11.scope - Session 11 of User core. 
May 16 10:05:07.546507 sshd[3925]: Connection closed by 10.0.0.1 port 32858 May 16 10:05:07.546855 sshd-session[3923]: pam_unix(sshd:session): session closed for user core May 16 10:05:07.551278 systemd[1]: sshd@10-10.0.0.79:22-10.0.0.1:32858.service: Deactivated successfully. May 16 10:05:07.553388 systemd[1]: session-11.scope: Deactivated successfully. May 16 10:05:07.554180 systemd-logind[1576]: Session 11 logged out. Waiting for processes to exit. May 16 10:05:07.556023 systemd-logind[1576]: Removed session 11. May 16 10:05:07.777135 kubelet[2712]: E0516 10:05:07.777099 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:05:07.777644 containerd[1591]: time="2025-05-16T10:05:07.777601356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bw78z,Uid:02cb407d-a66d-415d-8bed-a4bbe798ffd0,Namespace:kube-system,Attempt:0,}" May 16 10:05:07.778076 containerd[1591]: time="2025-05-16T10:05:07.777764249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86d67fd756-ft5g4,Uid:0b6655d4-90fa-4706-bd5a-d8806266de04,Namespace:calico-apiserver,Attempt:0,}" May 16 10:05:07.778076 containerd[1591]: time="2025-05-16T10:05:07.777959979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7jrt7,Uid:32e26eae-39a6-476b-8739-ed86db555147,Namespace:calico-system,Attempt:0,}" May 16 10:05:07.957109 kubelet[2712]: E0516 10:05:07.957077 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:05:08.337480 containerd[1591]: time="2025-05-16T10:05:08.335869574Z" level=error msg="ExecSync for \"9007bb7d0c711c4690623eec0a4e57be3a268dc28f3f9e8075473b3ea8f95af2\" failed" error="rpc error: code = NotFound desc = failed to exec in container: failed to create 
exec \"250341357eb8921722011be047af824123aa6848f17dea8b77c170561f23c9d6\": task 9007bb7d0c711c4690623eec0a4e57be3a268dc28f3f9e8075473b3ea8f95af2 not found" May 16 10:05:08.337978 kubelet[2712]: E0516 10:05:08.337929 2712 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = failed to exec in container: failed to create exec \"250341357eb8921722011be047af824123aa6848f17dea8b77c170561f23c9d6\": task 9007bb7d0c711c4690623eec0a4e57be3a268dc28f3f9e8075473b3ea8f95af2 not found" containerID="9007bb7d0c711c4690623eec0a4e57be3a268dc28f3f9e8075473b3ea8f95af2" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] May 16 10:05:08.371302 containerd[1591]: time="2025-05-16T10:05:08.371248285Z" level=error msg="ExecSync for \"9007bb7d0c711c4690623eec0a4e57be3a268dc28f3f9e8075473b3ea8f95af2\" failed" error="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" May 16 10:05:08.371744 kubelet[2712]: E0516 10:05:08.371698 2712 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="9007bb7d0c711c4690623eec0a4e57be3a268dc28f3f9e8075473b3ea8f95af2" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] May 16 10:05:08.372167 containerd[1591]: time="2025-05-16T10:05:08.372113479Z" level=error msg="ExecSync for \"9007bb7d0c711c4690623eec0a4e57be3a268dc28f3f9e8075473b3ea8f95af2\" failed" error="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" May 16 10:05:08.372456 kubelet[2712]: E0516 10:05:08.372253 2712 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="9007bb7d0c711c4690623eec0a4e57be3a268dc28f3f9e8075473b3ea8f95af2" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] May 16 10:05:08.374626 containerd[1591]: 
time="2025-05-16T10:05:08.374439140Z" level=error msg="ExecSync for \"9007bb7d0c711c4690623eec0a4e57be3a268dc28f3f9e8075473b3ea8f95af2\" failed" error="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" May 16 10:05:08.375827 kubelet[2712]: E0516 10:05:08.375713 2712 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="9007bb7d0c711c4690623eec0a4e57be3a268dc28f3f9e8075473b3ea8f95af2" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] May 16 10:05:08.375955 containerd[1591]: time="2025-05-16T10:05:08.375916662Z" level=error msg="ExecSync for \"9007bb7d0c711c4690623eec0a4e57be3a268dc28f3f9e8075473b3ea8f95af2\" failed" error="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" May 16 10:05:08.376193 kubelet[2712]: E0516 10:05:08.376046 2712 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="9007bb7d0c711c4690623eec0a4e57be3a268dc28f3f9e8075473b3ea8f95af2" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] May 16 10:05:08.376234 containerd[1591]: time="2025-05-16T10:05:08.376214539Z" level=error msg="ExecSync for \"9007bb7d0c711c4690623eec0a4e57be3a268dc28f3f9e8075473b3ea8f95af2\" failed" error="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" May 16 10:05:08.376476 kubelet[2712]: E0516 10:05:08.376406 2712 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="9007bb7d0c711c4690623eec0a4e57be3a268dc28f3f9e8075473b3ea8f95af2" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] May 16 10:05:08.407396 containerd[1591]: time="2025-05-16T10:05:08.407245558Z" 
level=error msg="Failed to destroy network for sandbox \"fc4665b877bfc85bcd83682b90f23e41e71915878fc87a17cbe6886bb6864fa3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 10:05:08.409979 systemd[1]: run-netns-cni\x2d9869fb90\x2d91b4\x2d062a\x2dda78\x2d0beeea6cf501.mount: Deactivated successfully. May 16 10:05:08.411164 containerd[1591]: time="2025-05-16T10:05:08.411081127Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7jrt7,Uid:32e26eae-39a6-476b-8739-ed86db555147,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc4665b877bfc85bcd83682b90f23e41e71915878fc87a17cbe6886bb6864fa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 10:05:08.411771 kubelet[2712]: E0516 10:05:08.411730 2712 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc4665b877bfc85bcd83682b90f23e41e71915878fc87a17cbe6886bb6864fa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 10:05:08.411824 kubelet[2712]: E0516 10:05:08.411804 2712 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc4665b877bfc85bcd83682b90f23e41e71915878fc87a17cbe6886bb6864fa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7jrt7" May 16 10:05:08.411849 kubelet[2712]: E0516 
10:05:08.411833 2712 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc4665b877bfc85bcd83682b90f23e41e71915878fc87a17cbe6886bb6864fa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7jrt7"
May 16 10:05:08.411915 kubelet[2712]: E0516 10:05:08.411880 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7jrt7_calico-system(32e26eae-39a6-476b-8739-ed86db555147)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7jrt7_calico-system(32e26eae-39a6-476b-8739-ed86db555147)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fc4665b877bfc85bcd83682b90f23e41e71915878fc87a17cbe6886bb6864fa3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7jrt7" podUID="32e26eae-39a6-476b-8739-ed86db555147"
May 16 10:05:08.421098 containerd[1591]: time="2025-05-16T10:05:08.421053305Z" level=error msg="Failed to destroy network for sandbox \"71d910c278e33a51bae6e8ed86a5ca5ad13f3a04b1d83f666a66cc4b42c46fdd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 10:05:08.422706 containerd[1591]: time="2025-05-16T10:05:08.422358806Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bw78z,Uid:02cb407d-a66d-415d-8bed-a4bbe798ffd0,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"71d910c278e33a51bae6e8ed86a5ca5ad13f3a04b1d83f666a66cc4b42c46fdd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 10:05:08.422801 kubelet[2712]: E0516 10:05:08.422747 2712 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71d910c278e33a51bae6e8ed86a5ca5ad13f3a04b1d83f666a66cc4b42c46fdd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 10:05:08.422852 kubelet[2712]: E0516 10:05:08.422823 2712 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71d910c278e33a51bae6e8ed86a5ca5ad13f3a04b1d83f666a66cc4b42c46fdd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-bw78z"
May 16 10:05:08.422885 kubelet[2712]: E0516 10:05:08.422861 2712 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71d910c278e33a51bae6e8ed86a5ca5ad13f3a04b1d83f666a66cc4b42c46fdd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-bw78z"
May 16 10:05:08.422958 kubelet[2712]: E0516 10:05:08.422910 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-bw78z_kube-system(02cb407d-a66d-415d-8bed-a4bbe798ffd0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-bw78z_kube-system(02cb407d-a66d-415d-8bed-a4bbe798ffd0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"71d910c278e33a51bae6e8ed86a5ca5ad13f3a04b1d83f666a66cc4b42c46fdd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-bw78z" podUID="02cb407d-a66d-415d-8bed-a4bbe798ffd0"
May 16 10:05:08.431600 containerd[1591]: time="2025-05-16T10:05:08.431557008Z" level=error msg="Failed to destroy network for sandbox \"5525e98027d1e9c40f4bf8b92dba1b3d7f71fbafc6606dd307136670ff684253\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 10:05:08.433779 containerd[1591]: time="2025-05-16T10:05:08.433733945Z" level=error msg="Failed to destroy network for sandbox \"b08a8699039ee4525f05ff8a473bddf929c616d5e3e8b341d368823da4cb927b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 10:05:08.434783 containerd[1591]: time="2025-05-16T10:05:08.434674594Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86d67fd756-7kxsr,Uid:27ccac50-d53c-4ece-9d32-1bfac8cd5d95,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5525e98027d1e9c40f4bf8b92dba1b3d7f71fbafc6606dd307136670ff684253\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 10:05:08.434972 kubelet[2712]: E0516 10:05:08.434927 2712 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5525e98027d1e9c40f4bf8b92dba1b3d7f71fbafc6606dd307136670ff684253\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 10:05:08.435109 kubelet[2712]: E0516 10:05:08.434981 2712 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5525e98027d1e9c40f4bf8b92dba1b3d7f71fbafc6606dd307136670ff684253\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86d67fd756-7kxsr"
May 16 10:05:08.435109 kubelet[2712]: E0516 10:05:08.435001 2712 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5525e98027d1e9c40f4bf8b92dba1b3d7f71fbafc6606dd307136670ff684253\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86d67fd756-7kxsr"
May 16 10:05:08.435109 kubelet[2712]: E0516 10:05:08.435041 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-86d67fd756-7kxsr_calico-apiserver(27ccac50-d53c-4ece-9d32-1bfac8cd5d95)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-86d67fd756-7kxsr_calico-apiserver(27ccac50-d53c-4ece-9d32-1bfac8cd5d95)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5525e98027d1e9c40f4bf8b92dba1b3d7f71fbafc6606dd307136670ff684253\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-86d67fd756-7kxsr" podUID="27ccac50-d53c-4ece-9d32-1bfac8cd5d95"
May 16 10:05:08.435714 containerd[1591]: time="2025-05-16T10:05:08.435671285Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86d67fd756-ft5g4,Uid:0b6655d4-90fa-4706-bd5a-d8806266de04,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b08a8699039ee4525f05ff8a473bddf929c616d5e3e8b341d368823da4cb927b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 10:05:08.435891 kubelet[2712]: E0516 10:05:08.435853 2712 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b08a8699039ee4525f05ff8a473bddf929c616d5e3e8b341d368823da4cb927b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 10:05:08.435950 kubelet[2712]: E0516 10:05:08.435918 2712 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b08a8699039ee4525f05ff8a473bddf929c616d5e3e8b341d368823da4cb927b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86d67fd756-ft5g4"
May 16 10:05:08.435950 kubelet[2712]: E0516 10:05:08.435943 2712 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b08a8699039ee4525f05ff8a473bddf929c616d5e3e8b341d368823da4cb927b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86d67fd756-ft5g4"
May 16 10:05:08.436020 kubelet[2712]: E0516 10:05:08.435996 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-86d67fd756-ft5g4_calico-apiserver(0b6655d4-90fa-4706-bd5a-d8806266de04)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-86d67fd756-ft5g4_calico-apiserver(0b6655d4-90fa-4706-bd5a-d8806266de04)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b08a8699039ee4525f05ff8a473bddf929c616d5e3e8b341d368823da4cb927b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-86d67fd756-ft5g4" podUID="0b6655d4-90fa-4706-bd5a-d8806266de04"
May 16 10:05:08.776703 kubelet[2712]: E0516 10:05:08.776670 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 10:05:08.777082 containerd[1591]: time="2025-05-16T10:05:08.776917785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57b996595-kz4qp,Uid:7187b294-e77c-457c-b05f-69fc94c6fde1,Namespace:calico-system,Attempt:0,}"
May 16 10:05:08.777082 containerd[1591]: time="2025-05-16T10:05:08.777005764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mtmhb,Uid:2a935d52-e822-4eeb-870a-53e15a71d983,Namespace:kube-system,Attempt:0,}"
May 16 10:05:08.833122 containerd[1591]: time="2025-05-16T10:05:08.833049262Z" level=error msg="Failed to destroy network for sandbox \"068079fd7aa69521cee08426503e266107c9e6ec4882a2c9e043ebe06ff23986\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 10:05:08.834434 containerd[1591]: time="2025-05-16T10:05:08.834385516Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mtmhb,Uid:2a935d52-e822-4eeb-870a-53e15a71d983,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"068079fd7aa69521cee08426503e266107c9e6ec4882a2c9e043ebe06ff23986\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 10:05:08.834671 kubelet[2712]: E0516 10:05:08.834633 2712 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"068079fd7aa69521cee08426503e266107c9e6ec4882a2c9e043ebe06ff23986\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 10:05:08.834744 kubelet[2712]: E0516 10:05:08.834696 2712 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"068079fd7aa69521cee08426503e266107c9e6ec4882a2c9e043ebe06ff23986\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-mtmhb"
May 16 10:05:08.834744 kubelet[2712]: E0516 10:05:08.834719 2712 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"068079fd7aa69521cee08426503e266107c9e6ec4882a2c9e043ebe06ff23986\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-mtmhb"
May 16 10:05:08.834883 kubelet[2712]: E0516 10:05:08.834766 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-mtmhb_kube-system(2a935d52-e822-4eeb-870a-53e15a71d983)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-mtmhb_kube-system(2a935d52-e822-4eeb-870a-53e15a71d983)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"068079fd7aa69521cee08426503e266107c9e6ec4882a2c9e043ebe06ff23986\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-mtmhb" podUID="2a935d52-e822-4eeb-870a-53e15a71d983"
May 16 10:05:08.836883 containerd[1591]: time="2025-05-16T10:05:08.836824919Z" level=error msg="Failed to destroy network for sandbox \"d1ec2788f7f7fb35bf7d6a1eaffc1c59206dc129f254c53d0c30e07253bb97c7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 10:05:08.838173 containerd[1591]: time="2025-05-16T10:05:08.838135130Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57b996595-kz4qp,Uid:7187b294-e77c-457c-b05f-69fc94c6fde1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1ec2788f7f7fb35bf7d6a1eaffc1c59206dc129f254c53d0c30e07253bb97c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 10:05:08.838406 kubelet[2712]: E0516 10:05:08.838368 2712 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1ec2788f7f7fb35bf7d6a1eaffc1c59206dc129f254c53d0c30e07253bb97c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 10:05:08.838459 kubelet[2712]: E0516 10:05:08.838440 2712 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1ec2788f7f7fb35bf7d6a1eaffc1c59206dc129f254c53d0c30e07253bb97c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57b996595-kz4qp"
May 16 10:05:08.838495 kubelet[2712]: E0516 10:05:08.838469 2712 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1ec2788f7f7fb35bf7d6a1eaffc1c59206dc129f254c53d0c30e07253bb97c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57b996595-kz4qp"
May 16 10:05:08.838583 kubelet[2712]: E0516 10:05:08.838552 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-57b996595-kz4qp_calico-system(7187b294-e77c-457c-b05f-69fc94c6fde1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-57b996595-kz4qp_calico-system(7187b294-e77c-457c-b05f-69fc94c6fde1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d1ec2788f7f7fb35bf7d6a1eaffc1c59206dc129f254c53d0c30e07253bb97c7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-57b996595-kz4qp" podUID="7187b294-e77c-457c-b05f-69fc94c6fde1"
May 16 10:05:08.962838 kubelet[2712]: I0516 10:05:08.962803 2712 scope.go:117] "RemoveContainer" containerID="9007bb7d0c711c4690623eec0a4e57be3a268dc28f3f9e8075473b3ea8f95af2"
May 16 10:05:08.963229 kubelet[2712]: E0516 10:05:08.962885 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 10:05:08.966195 containerd[1591]: time="2025-05-16T10:05:08.966145657Z" level=info msg="CreateContainer within sandbox \"c834442d2f5ba39b5f2ec0546168c6bf29bbbe0eda1b63f79f453724f7dedd85\" for container &ContainerMetadata{Name:calico-node,Attempt:1,}"
May 16 10:05:08.975547 containerd[1591]: time="2025-05-16T10:05:08.975444122Z" level=info msg="Container fa2dd9b04e61ad78c0cc7207db1e7a6f28723151396926e365d9e386aae9c95a: CDI devices from CRI Config.CDIDevices: []"
May 16 10:05:08.986216 containerd[1591]: time="2025-05-16T10:05:08.986176611Z" level=info msg="CreateContainer within sandbox \"c834442d2f5ba39b5f2ec0546168c6bf29bbbe0eda1b63f79f453724f7dedd85\" for &ContainerMetadata{Name:calico-node,Attempt:1,} returns container id \"fa2dd9b04e61ad78c0cc7207db1e7a6f28723151396926e365d9e386aae9c95a\""
May 16 10:05:08.986740 containerd[1591]: time="2025-05-16T10:05:08.986682331Z" level=info msg="StartContainer for \"fa2dd9b04e61ad78c0cc7207db1e7a6f28723151396926e365d9e386aae9c95a\""
May 16 10:05:08.990768 containerd[1591]: time="2025-05-16T10:05:08.990728920Z" level=info msg="connecting to shim fa2dd9b04e61ad78c0cc7207db1e7a6f28723151396926e365d9e386aae9c95a" address="unix:///run/containerd/s/de0a19da887bcded7dee23c148b32c1349f51492d48b029a54bbe25a05229872" protocol=ttrpc version=3
May 16 10:05:09.015646 systemd[1]: Started cri-containerd-fa2dd9b04e61ad78c0cc7207db1e7a6f28723151396926e365d9e386aae9c95a.scope - libcontainer container fa2dd9b04e61ad78c0cc7207db1e7a6f28723151396926e365d9e386aae9c95a.
May 16 10:05:09.059785 containerd[1591]: time="2025-05-16T10:05:09.059684829Z" level=info msg="StartContainer for \"fa2dd9b04e61ad78c0cc7207db1e7a6f28723151396926e365d9e386aae9c95a\" returns successfully"
May 16 10:05:09.108678 systemd[1]: cri-containerd-fa2dd9b04e61ad78c0cc7207db1e7a6f28723151396926e365d9e386aae9c95a.scope: Deactivated successfully.
May 16 10:05:09.109474 containerd[1591]: time="2025-05-16T10:05:09.109435941Z" level=info msg="received exit event container_id:\"fa2dd9b04e61ad78c0cc7207db1e7a6f28723151396926e365d9e386aae9c95a\" id:\"fa2dd9b04e61ad78c0cc7207db1e7a6f28723151396926e365d9e386aae9c95a\" pid:4175 exit_status:1 exited_at:{seconds:1747389909 nanos:109205462}"
May 16 10:05:09.109873 containerd[1591]: time="2025-05-16T10:05:09.109573080Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fa2dd9b04e61ad78c0cc7207db1e7a6f28723151396926e365d9e386aae9c95a\" id:\"fa2dd9b04e61ad78c0cc7207db1e7a6f28723151396926e365d9e386aae9c95a\" pid:4175 exit_status:1 exited_at:{seconds:1747389909 nanos:109205462}"
May 16 10:05:09.312041 systemd[1]: run-netns-cni\x2dcd5978ee\x2dc46a\x2defb6\x2d7fbd\x2d77c6afa3530e.mount: Deactivated successfully.
May 16 10:05:09.312163 systemd[1]: run-netns-cni\x2d5741bc88\x2ddc0a\x2d4639\x2dd6db\x2d18f8be296bf4.mount: Deactivated successfully.
May 16 10:05:09.312239 systemd[1]: run-netns-cni\x2dd5a3b7d8\x2de90c\x2d65e7\x2d0e66\x2dbad29a521ae4.mount: Deactivated successfully.
May 16 10:05:09.967978 kubelet[2712]: I0516 10:05:09.967930 2712 scope.go:117] "RemoveContainer" containerID="9007bb7d0c711c4690623eec0a4e57be3a268dc28f3f9e8075473b3ea8f95af2"
May 16 10:05:09.968460 kubelet[2712]: I0516 10:05:09.968195 2712 scope.go:117] "RemoveContainer" containerID="fa2dd9b04e61ad78c0cc7207db1e7a6f28723151396926e365d9e386aae9c95a"
May 16 10:05:09.968460 kubelet[2712]: E0516 10:05:09.968247 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 10:05:09.968460 kubelet[2712]: E0516 10:05:09.968355 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 10s restarting failed container=calico-node pod=calico-node-cm57r_calico-system(57d462f3-4a3d-42b3-9610-fcce09d094b9)\"" pod="calico-system/calico-node-cm57r" podUID="57d462f3-4a3d-42b3-9610-fcce09d094b9"
May 16 10:05:09.970876 containerd[1591]: time="2025-05-16T10:05:09.970822227Z" level=info msg="RemoveContainer for \"9007bb7d0c711c4690623eec0a4e57be3a268dc28f3f9e8075473b3ea8f95af2\""
May 16 10:05:10.000167 containerd[1591]: time="2025-05-16T10:05:10.000104415Z" level=info msg="RemoveContainer for \"9007bb7d0c711c4690623eec0a4e57be3a268dc28f3f9e8075473b3ea8f95af2\" returns successfully"
May 16 10:05:10.973722 kubelet[2712]: I0516 10:05:10.973691 2712 scope.go:117] "RemoveContainer" containerID="fa2dd9b04e61ad78c0cc7207db1e7a6f28723151396926e365d9e386aae9c95a"
May 16 10:05:10.974140 kubelet[2712]: E0516 10:05:10.973768 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 10:05:10.974140 kubelet[2712]: E0516 10:05:10.973849 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 10s restarting failed container=calico-node pod=calico-node-cm57r_calico-system(57d462f3-4a3d-42b3-9610-fcce09d094b9)\"" pod="calico-system/calico-node-cm57r" podUID="57d462f3-4a3d-42b3-9610-fcce09d094b9"
May 16 10:05:11.975378 kubelet[2712]: I0516 10:05:11.975334 2712 scope.go:117] "RemoveContainer" containerID="fa2dd9b04e61ad78c0cc7207db1e7a6f28723151396926e365d9e386aae9c95a"
May 16 10:05:11.975846 kubelet[2712]: E0516 10:05:11.975411 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 10:05:11.975846 kubelet[2712]: E0516 10:05:11.975506 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 10s restarting failed container=calico-node pod=calico-node-cm57r_calico-system(57d462f3-4a3d-42b3-9610-fcce09d094b9)\"" pod="calico-system/calico-node-cm57r" podUID="57d462f3-4a3d-42b3-9610-fcce09d094b9"
May 16 10:05:12.564767 systemd[1]: Started sshd@11-10.0.0.79:22-10.0.0.1:48112.service - OpenSSH per-connection server daemon (10.0.0.1:48112).
May 16 10:05:12.622412 sshd[4211]: Accepted publickey for core from 10.0.0.1 port 48112 ssh2: RSA SHA256:TkuFkvH6sCJ3kuKrabiD7Z8ORwd+XoH0QjfS0JDvRdI
May 16 10:05:12.623908 sshd-session[4211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 10:05:12.628199 systemd-logind[1576]: New session 12 of user core.
May 16 10:05:12.640647 systemd[1]: Started session-12.scope - Session 12 of User core.
May 16 10:05:12.753762 sshd[4213]: Connection closed by 10.0.0.1 port 48112
May 16 10:05:12.754042 sshd-session[4211]: pam_unix(sshd:session): session closed for user core
May 16 10:05:12.757844 systemd[1]: sshd@11-10.0.0.79:22-10.0.0.1:48112.service: Deactivated successfully.
May 16 10:05:12.759893 systemd[1]: session-12.scope: Deactivated successfully.
May 16 10:05:12.760626 systemd-logind[1576]: Session 12 logged out. Waiting for processes to exit.
May 16 10:05:12.761937 systemd-logind[1576]: Removed session 12.
May 16 10:05:17.778454 systemd[1]: Started sshd@12-10.0.0.79:22-10.0.0.1:48128.service - OpenSSH per-connection server daemon (10.0.0.1:48128).
May 16 10:05:17.832015 sshd[4229]: Accepted publickey for core from 10.0.0.1 port 48128 ssh2: RSA SHA256:TkuFkvH6sCJ3kuKrabiD7Z8ORwd+XoH0QjfS0JDvRdI
May 16 10:05:17.833298 sshd-session[4229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 10:05:17.837110 systemd-logind[1576]: New session 13 of user core.
May 16 10:05:17.846648 systemd[1]: Started session-13.scope - Session 13 of User core.
May 16 10:05:17.952049 sshd[4231]: Connection closed by 10.0.0.1 port 48128
May 16 10:05:17.952349 sshd-session[4229]: pam_unix(sshd:session): session closed for user core
May 16 10:05:17.962880 systemd[1]: sshd@12-10.0.0.79:22-10.0.0.1:48128.service: Deactivated successfully.
May 16 10:05:17.964493 systemd[1]: session-13.scope: Deactivated successfully.
May 16 10:05:17.965327 systemd-logind[1576]: Session 13 logged out. Waiting for processes to exit.
May 16 10:05:17.968009 systemd[1]: Started sshd@13-10.0.0.79:22-10.0.0.1:48134.service - OpenSSH per-connection server daemon (10.0.0.1:48134).
May 16 10:05:17.968613 systemd-logind[1576]: Removed session 13.
May 16 10:05:18.018289 sshd[4245]: Accepted publickey for core from 10.0.0.1 port 48134 ssh2: RSA SHA256:TkuFkvH6sCJ3kuKrabiD7Z8ORwd+XoH0QjfS0JDvRdI
May 16 10:05:18.019624 sshd-session[4245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 10:05:18.023602 systemd-logind[1576]: New session 14 of user core.
May 16 10:05:18.033643 systemd[1]: Started session-14.scope - Session 14 of User core.
May 16 10:05:18.163691 sshd[4247]: Connection closed by 10.0.0.1 port 48134
May 16 10:05:18.164012 sshd-session[4245]: pam_unix(sshd:session): session closed for user core
May 16 10:05:18.173834 systemd[1]: sshd@13-10.0.0.79:22-10.0.0.1:48134.service: Deactivated successfully.
May 16 10:05:18.176431 systemd[1]: session-14.scope: Deactivated successfully.
May 16 10:05:18.177830 systemd-logind[1576]: Session 14 logged out. Waiting for processes to exit.
May 16 10:05:18.183825 systemd[1]: Started sshd@14-10.0.0.79:22-10.0.0.1:54600.service - OpenSSH per-connection server daemon (10.0.0.1:54600).
May 16 10:05:18.186016 systemd-logind[1576]: Removed session 14.
May 16 10:05:18.229859 sshd[4258]: Accepted publickey for core from 10.0.0.1 port 54600 ssh2: RSA SHA256:TkuFkvH6sCJ3kuKrabiD7Z8ORwd+XoH0QjfS0JDvRdI
May 16 10:05:18.231114 sshd-session[4258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 10:05:18.235398 systemd-logind[1576]: New session 15 of user core.
May 16 10:05:18.245670 systemd[1]: Started session-15.scope - Session 15 of User core.
May 16 10:05:18.355318 sshd[4260]: Connection closed by 10.0.0.1 port 54600
May 16 10:05:18.355561 sshd-session[4258]: pam_unix(sshd:session): session closed for user core
May 16 10:05:18.360008 systemd[1]: sshd@14-10.0.0.79:22-10.0.0.1:54600.service: Deactivated successfully.
May 16 10:05:18.362002 systemd[1]: session-15.scope: Deactivated successfully.
May 16 10:05:18.362840 systemd-logind[1576]: Session 15 logged out. Waiting for processes to exit.
May 16 10:05:18.364044 systemd-logind[1576]: Removed session 15.
May 16 10:05:18.776956 containerd[1591]: time="2025-05-16T10:05:18.776912713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7jrt7,Uid:32e26eae-39a6-476b-8739-ed86db555147,Namespace:calico-system,Attempt:0,}"
May 16 10:05:18.849252 containerd[1591]: time="2025-05-16T10:05:18.849179397Z" level=error msg="Failed to destroy network for sandbox \"eca187d2496082c4be46c4772b094cb39041a9fb6125536392190518602df03b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 10:05:18.850733 containerd[1591]: time="2025-05-16T10:05:18.850680728Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7jrt7,Uid:32e26eae-39a6-476b-8739-ed86db555147,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"eca187d2496082c4be46c4772b094cb39041a9fb6125536392190518602df03b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 10:05:18.850948 kubelet[2712]: E0516 10:05:18.850902 2712 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eca187d2496082c4be46c4772b094cb39041a9fb6125536392190518602df03b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 10:05:18.851291 kubelet[2712]: E0516 10:05:18.850972 2712 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eca187d2496082c4be46c4772b094cb39041a9fb6125536392190518602df03b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7jrt7"
May 16 10:05:18.851291 kubelet[2712]: E0516 10:05:18.850993 2712 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eca187d2496082c4be46c4772b094cb39041a9fb6125536392190518602df03b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7jrt7"
May 16 10:05:18.851291 kubelet[2712]: E0516 10:05:18.851035 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7jrt7_calico-system(32e26eae-39a6-476b-8739-ed86db555147)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7jrt7_calico-system(32e26eae-39a6-476b-8739-ed86db555147)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eca187d2496082c4be46c4772b094cb39041a9fb6125536392190518602df03b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7jrt7" podUID="32e26eae-39a6-476b-8739-ed86db555147"
May 16 10:05:18.852330 systemd[1]: run-netns-cni\x2db20e4ecc\x2d3820\x2d725e\x2dc371\x2d2ebaa68282c3.mount: Deactivated successfully.
May 16 10:05:20.776398 containerd[1591]: time="2025-05-16T10:05:20.776349947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86d67fd756-7kxsr,Uid:27ccac50-d53c-4ece-9d32-1bfac8cd5d95,Namespace:calico-apiserver,Attempt:0,}"
May 16 10:05:20.776904 containerd[1591]: time="2025-05-16T10:05:20.776493626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57b996595-kz4qp,Uid:7187b294-e77c-457c-b05f-69fc94c6fde1,Namespace:calico-system,Attempt:0,}"
May 16 10:05:20.834227 containerd[1591]: time="2025-05-16T10:05:20.834174842Z" level=error msg="Failed to destroy network for sandbox \"3c1bcb3798ca746a1688c5576f5fdd8eb41f102ec74465f0b6bcb5b10c829747\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 10:05:20.837169 systemd[1]: run-netns-cni\x2d0f71c675\x2de7e0\x2d4022\x2d187c\x2d89a4ff1ae33e.mount: Deactivated successfully.
May 16 10:05:20.837709 containerd[1591]: time="2025-05-16T10:05:20.837663556Z" level=error msg="Failed to destroy network for sandbox \"3c7f99ff6ec873c051261bba6c6a8eecceb494a73c2d739d83eefce989cb87c5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 10:05:20.841073 systemd[1]: run-netns-cni\x2dcef2b200\x2d1702\x2deeed\x2d4e50\x2d8c27aea56b57.mount: Deactivated successfully.
May 16 10:05:20.841277 containerd[1591]: time="2025-05-16T10:05:20.841214837Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57b996595-kz4qp,Uid:7187b294-e77c-457c-b05f-69fc94c6fde1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c1bcb3798ca746a1688c5576f5fdd8eb41f102ec74465f0b6bcb5b10c829747\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 10:05:20.841558 kubelet[2712]: E0516 10:05:20.841485 2712 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c1bcb3798ca746a1688c5576f5fdd8eb41f102ec74465f0b6bcb5b10c829747\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 10:05:20.842021 kubelet[2712]: E0516 10:05:20.841562 2712 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c1bcb3798ca746a1688c5576f5fdd8eb41f102ec74465f0b6bcb5b10c829747\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57b996595-kz4qp"
May 16 10:05:20.842021 kubelet[2712]: E0516 10:05:20.841582 2712 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c1bcb3798ca746a1688c5576f5fdd8eb41f102ec74465f0b6bcb5b10c829747\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57b996595-kz4qp"
May 16 10:05:20.842021 kubelet[2712]: E0516 10:05:20.841631 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-57b996595-kz4qp_calico-system(7187b294-e77c-457c-b05f-69fc94c6fde1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-57b996595-kz4qp_calico-system(7187b294-e77c-457c-b05f-69fc94c6fde1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3c1bcb3798ca746a1688c5576f5fdd8eb41f102ec74465f0b6bcb5b10c829747\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-57b996595-kz4qp" podUID="7187b294-e77c-457c-b05f-69fc94c6fde1"
May 16 10:05:20.842211 containerd[1591]: time="2025-05-16T10:05:20.842180575Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86d67fd756-7kxsr,Uid:27ccac50-d53c-4ece-9d32-1bfac8cd5d95,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c7f99ff6ec873c051261bba6c6a8eecceb494a73c2d739d83eefce989cb87c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 10:05:20.842399 kubelet[2712]: E0516 10:05:20.842368 2712 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c7f99ff6ec873c051261bba6c6a8eecceb494a73c2d739d83eefce989cb87c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 10:05:20.842435 kubelet[2712]: E0516 10:05:20.842410
2712 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c7f99ff6ec873c051261bba6c6a8eecceb494a73c2d739d83eefce989cb87c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86d67fd756-7kxsr" May 16 10:05:20.842472 kubelet[2712]: E0516 10:05:20.842432 2712 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c7f99ff6ec873c051261bba6c6a8eecceb494a73c2d739d83eefce989cb87c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86d67fd756-7kxsr" May 16 10:05:20.842575 kubelet[2712]: E0516 10:05:20.842496 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-86d67fd756-7kxsr_calico-apiserver(27ccac50-d53c-4ece-9d32-1bfac8cd5d95)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-86d67fd756-7kxsr_calico-apiserver(27ccac50-d53c-4ece-9d32-1bfac8cd5d95)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3c7f99ff6ec873c051261bba6c6a8eecceb494a73c2d739d83eefce989cb87c5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-86d67fd756-7kxsr" podUID="27ccac50-d53c-4ece-9d32-1bfac8cd5d95" May 16 10:05:21.776269 kubelet[2712]: E0516 10:05:21.776220 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:05:21.776406 kubelet[2712]: E0516 10:05:21.776326 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:05:21.776715 containerd[1591]: time="2025-05-16T10:05:21.776683924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mtmhb,Uid:2a935d52-e822-4eeb-870a-53e15a71d983,Namespace:kube-system,Attempt:0,}" May 16 10:05:21.777361 containerd[1591]: time="2025-05-16T10:05:21.776698282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bw78z,Uid:02cb407d-a66d-415d-8bed-a4bbe798ffd0,Namespace:kube-system,Attempt:0,}" May 16 10:05:21.835243 containerd[1591]: time="2025-05-16T10:05:21.835184632Z" level=error msg="Failed to destroy network for sandbox \"01949d462d3f0d98c576eeb4bb5ddc1d7e77d264edc2d4a158f4856f4a351aa1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 10:05:21.837531 containerd[1591]: time="2025-05-16T10:05:21.837458464Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mtmhb,Uid:2a935d52-e822-4eeb-870a-53e15a71d983,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"01949d462d3f0d98c576eeb4bb5ddc1d7e77d264edc2d4a158f4856f4a351aa1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 10:05:21.837834 kubelet[2712]: E0516 10:05:21.837802 2712 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01949d462d3f0d98c576eeb4bb5ddc1d7e77d264edc2d4a158f4856f4a351aa1\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 10:05:21.838026 kubelet[2712]: E0516 10:05:21.837957 2712 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01949d462d3f0d98c576eeb4bb5ddc1d7e77d264edc2d4a158f4856f4a351aa1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-mtmhb" May 16 10:05:21.838073 kubelet[2712]: E0516 10:05:21.838028 2712 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01949d462d3f0d98c576eeb4bb5ddc1d7e77d264edc2d4a158f4856f4a351aa1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-mtmhb" May 16 10:05:21.838389 kubelet[2712]: E0516 10:05:21.838312 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-mtmhb_kube-system(2a935d52-e822-4eeb-870a-53e15a71d983)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-mtmhb_kube-system(2a935d52-e822-4eeb-870a-53e15a71d983)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"01949d462d3f0d98c576eeb4bb5ddc1d7e77d264edc2d4a158f4856f4a351aa1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-mtmhb" podUID="2a935d52-e822-4eeb-870a-53e15a71d983" May 16 10:05:21.838390 systemd[1]: 
run-netns-cni\x2da28acbed\x2dc0c9\x2dfc44\x2d562e\x2d6927fa71700a.mount: Deactivated successfully. May 16 10:05:21.838680 containerd[1591]: time="2025-05-16T10:05:21.838608783Z" level=error msg="Failed to destroy network for sandbox \"ae2c46fd4581db3873acb28347625128c54c481ab25dcc5e3e9e54fceea1e9f7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 10:05:21.839957 containerd[1591]: time="2025-05-16T10:05:21.839932159Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bw78z,Uid:02cb407d-a66d-415d-8bed-a4bbe798ffd0,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae2c46fd4581db3873acb28347625128c54c481ab25dcc5e3e9e54fceea1e9f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 10:05:21.840231 kubelet[2712]: E0516 10:05:21.840192 2712 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae2c46fd4581db3873acb28347625128c54c481ab25dcc5e3e9e54fceea1e9f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 10:05:21.840535 kubelet[2712]: E0516 10:05:21.840332 2712 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae2c46fd4581db3873acb28347625128c54c481ab25dcc5e3e9e54fceea1e9f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-bw78z" 
May 16 10:05:21.840535 kubelet[2712]: E0516 10:05:21.840358 2712 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae2c46fd4581db3873acb28347625128c54c481ab25dcc5e3e9e54fceea1e9f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-bw78z" May 16 10:05:21.840535 kubelet[2712]: E0516 10:05:21.840421 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-bw78z_kube-system(02cb407d-a66d-415d-8bed-a4bbe798ffd0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-bw78z_kube-system(02cb407d-a66d-415d-8bed-a4bbe798ffd0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ae2c46fd4581db3873acb28347625128c54c481ab25dcc5e3e9e54fceea1e9f7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-bw78z" podUID="02cb407d-a66d-415d-8bed-a4bbe798ffd0" May 16 10:05:21.841440 systemd[1]: run-netns-cni\x2d0226d44a\x2dd62d\x2d4c71\x2d12aa\x2df11c33c14166.mount: Deactivated successfully. 
May 16 10:05:22.776376 containerd[1591]: time="2025-05-16T10:05:22.776325661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86d67fd756-ft5g4,Uid:0b6655d4-90fa-4706-bd5a-d8806266de04,Namespace:calico-apiserver,Attempt:0,}" May 16 10:05:22.824969 containerd[1591]: time="2025-05-16T10:05:22.824911597Z" level=error msg="Failed to destroy network for sandbox \"80a0f59c2abd73befed8dabc34009d98bdbf53d33ee7666f57b60dc4a19e686c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 10:05:22.826279 containerd[1591]: time="2025-05-16T10:05:22.826230643Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86d67fd756-ft5g4,Uid:0b6655d4-90fa-4706-bd5a-d8806266de04,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"80a0f59c2abd73befed8dabc34009d98bdbf53d33ee7666f57b60dc4a19e686c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 10:05:22.826536 kubelet[2712]: E0516 10:05:22.826470 2712 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80a0f59c2abd73befed8dabc34009d98bdbf53d33ee7666f57b60dc4a19e686c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 10:05:22.826819 kubelet[2712]: E0516 10:05:22.826561 2712 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80a0f59c2abd73befed8dabc34009d98bdbf53d33ee7666f57b60dc4a19e686c\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86d67fd756-ft5g4" May 16 10:05:22.826819 kubelet[2712]: E0516 10:05:22.826587 2712 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80a0f59c2abd73befed8dabc34009d98bdbf53d33ee7666f57b60dc4a19e686c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86d67fd756-ft5g4" May 16 10:05:22.826819 kubelet[2712]: E0516 10:05:22.826641 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-86d67fd756-ft5g4_calico-apiserver(0b6655d4-90fa-4706-bd5a-d8806266de04)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-86d67fd756-ft5g4_calico-apiserver(0b6655d4-90fa-4706-bd5a-d8806266de04)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"80a0f59c2abd73befed8dabc34009d98bdbf53d33ee7666f57b60dc4a19e686c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-86d67fd756-ft5g4" podUID="0b6655d4-90fa-4706-bd5a-d8806266de04" May 16 10:05:22.827663 systemd[1]: run-netns-cni\x2d7ba1555f\x2d3a6e\x2d9484\x2df1d7\x2d1675378750d5.mount: Deactivated successfully. May 16 10:05:23.371964 systemd[1]: Started sshd@15-10.0.0.79:22-10.0.0.1:54602.service - OpenSSH per-connection server daemon (10.0.0.1:54602). 
May 16 10:05:23.424345 sshd[4500]: Accepted publickey for core from 10.0.0.1 port 54602 ssh2: RSA SHA256:TkuFkvH6sCJ3kuKrabiD7Z8ORwd+XoH0QjfS0JDvRdI May 16 10:05:23.425590 sshd-session[4500]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 10:05:23.429697 systemd-logind[1576]: New session 16 of user core. May 16 10:05:23.440639 systemd[1]: Started session-16.scope - Session 16 of User core. May 16 10:05:23.549158 sshd[4502]: Connection closed by 10.0.0.1 port 54602 May 16 10:05:23.549454 sshd-session[4500]: pam_unix(sshd:session): session closed for user core May 16 10:05:23.552198 systemd[1]: sshd@15-10.0.0.79:22-10.0.0.1:54602.service: Deactivated successfully. May 16 10:05:23.554216 systemd[1]: session-16.scope: Deactivated successfully. May 16 10:05:23.555817 systemd-logind[1576]: Session 16 logged out. Waiting for processes to exit. May 16 10:05:23.557047 systemd-logind[1576]: Removed session 16. May 16 10:05:26.776280 kubelet[2712]: I0516 10:05:26.776231 2712 scope.go:117] "RemoveContainer" containerID="fa2dd9b04e61ad78c0cc7207db1e7a6f28723151396926e365d9e386aae9c95a" May 16 10:05:26.776790 kubelet[2712]: E0516 10:05:26.776316 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:05:26.779944 containerd[1591]: time="2025-05-16T10:05:26.779893841Z" level=info msg="CreateContainer within sandbox \"c834442d2f5ba39b5f2ec0546168c6bf29bbbe0eda1b63f79f453724f7dedd85\" for container &ContainerMetadata{Name:calico-node,Attempt:2,}" May 16 10:05:26.788985 containerd[1591]: time="2025-05-16T10:05:26.788930349Z" level=info msg="Container 83ad986c750b49fa9a3e8cf94cc08f8293304d748e9df11e9e8fbb2e603f2ec8: CDI devices from CRI Config.CDIDevices: []" May 16 10:05:26.799685 containerd[1591]: time="2025-05-16T10:05:26.799627196Z" level=info msg="CreateContainer within sandbox 
\"c834442d2f5ba39b5f2ec0546168c6bf29bbbe0eda1b63f79f453724f7dedd85\" for &ContainerMetadata{Name:calico-node,Attempt:2,} returns container id \"83ad986c750b49fa9a3e8cf94cc08f8293304d748e9df11e9e8fbb2e603f2ec8\"" May 16 10:05:26.800153 containerd[1591]: time="2025-05-16T10:05:26.800122481Z" level=info msg="StartContainer for \"83ad986c750b49fa9a3e8cf94cc08f8293304d748e9df11e9e8fbb2e603f2ec8\"" May 16 10:05:26.801614 containerd[1591]: time="2025-05-16T10:05:26.801587137Z" level=info msg="connecting to shim 83ad986c750b49fa9a3e8cf94cc08f8293304d748e9df11e9e8fbb2e603f2ec8" address="unix:///run/containerd/s/de0a19da887bcded7dee23c148b32c1349f51492d48b029a54bbe25a05229872" protocol=ttrpc version=3 May 16 10:05:26.831718 systemd[1]: Started cri-containerd-83ad986c750b49fa9a3e8cf94cc08f8293304d748e9df11e9e8fbb2e603f2ec8.scope - libcontainer container 83ad986c750b49fa9a3e8cf94cc08f8293304d748e9df11e9e8fbb2e603f2ec8. May 16 10:05:26.941633 systemd[1]: cri-containerd-83ad986c750b49fa9a3e8cf94cc08f8293304d748e9df11e9e8fbb2e603f2ec8.scope: Deactivated successfully. 
May 16 10:05:26.943601 containerd[1591]: time="2025-05-16T10:05:26.943568079Z" level=info msg="TaskExit event in podsandbox handler container_id:\"83ad986c750b49fa9a3e8cf94cc08f8293304d748e9df11e9e8fbb2e603f2ec8\" id:\"83ad986c750b49fa9a3e8cf94cc08f8293304d748e9df11e9e8fbb2e603f2ec8\" pid:4530 exit_status:1 exited_at:{seconds:1747389926 nanos:943230098}" May 16 10:05:26.965233 containerd[1591]: time="2025-05-16T10:05:26.965188363Z" level=info msg="received exit event container_id:\"83ad986c750b49fa9a3e8cf94cc08f8293304d748e9df11e9e8fbb2e603f2ec8\" id:\"83ad986c750b49fa9a3e8cf94cc08f8293304d748e9df11e9e8fbb2e603f2ec8\" pid:4530 exit_status:1 exited_at:{seconds:1747389926 nanos:943230098}" May 16 10:05:26.973561 containerd[1591]: time="2025-05-16T10:05:26.973505580Z" level=info msg="StartContainer for \"83ad986c750b49fa9a3e8cf94cc08f8293304d748e9df11e9e8fbb2e603f2ec8\" returns successfully" May 16 10:05:26.986479 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-83ad986c750b49fa9a3e8cf94cc08f8293304d748e9df11e9e8fbb2e603f2ec8-rootfs.mount: Deactivated successfully. 
May 16 10:05:27.010536 kubelet[2712]: E0516 10:05:27.010222 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:05:27.065212 kubelet[2712]: I0516 10:05:27.064782 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-cm57r" podStartSLOduration=22.37249879 podStartE2EDuration="56.064763132s" podCreationTimestamp="2025-05-16 10:04:31 +0000 UTC" firstStartedPulling="2025-05-16 10:04:32.359702627 +0000 UTC m=+12.781846540" lastFinishedPulling="2025-05-16 10:05:06.051966969 +0000 UTC m=+46.474110882" observedRunningTime="2025-05-16 10:05:07.37295923 +0000 UTC m=+47.795103163" watchObservedRunningTime="2025-05-16 10:05:27.064763132 +0000 UTC m=+67.486907045" May 16 10:05:27.070126 containerd[1591]: time="2025-05-16T10:05:27.070059249Z" level=error msg="ExecSync for \"83ad986c750b49fa9a3e8cf94cc08f8293304d748e9df11e9e8fbb2e603f2ec8\" failed" error="rpc error: code = NotFound desc = failed to exec in container: failed to create exec \"6bca73859136a353cdce2eb7b93323eef7df92dc2d753d97715f09b61e80e134\": task 83ad986c750b49fa9a3e8cf94cc08f8293304d748e9df11e9e8fbb2e603f2ec8 not found" May 16 10:05:27.070366 kubelet[2712]: E0516 10:05:27.070305 2712 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = failed to exec in container: failed to create exec \"6bca73859136a353cdce2eb7b93323eef7df92dc2d753d97715f09b61e80e134\": task 83ad986c750b49fa9a3e8cf94cc08f8293304d748e9df11e9e8fbb2e603f2ec8 not found" containerID="83ad986c750b49fa9a3e8cf94cc08f8293304d748e9df11e9e8fbb2e603f2ec8" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] May 16 10:05:27.073996 containerd[1591]: time="2025-05-16T10:05:27.073946192Z" level=error msg="ExecSync for \"83ad986c750b49fa9a3e8cf94cc08f8293304d748e9df11e9e8fbb2e603f2ec8\" failed" error="rpc error: code = Unknown desc = failed 
to exec in container: container is in CONTAINER_EXITED state" May 16 10:05:27.074106 kubelet[2712]: E0516 10:05:27.074078 2712 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="83ad986c750b49fa9a3e8cf94cc08f8293304d748e9df11e9e8fbb2e603f2ec8" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] May 16 10:05:27.074319 containerd[1591]: time="2025-05-16T10:05:27.074252257Z" level=error msg="ExecSync for \"83ad986c750b49fa9a3e8cf94cc08f8293304d748e9df11e9e8fbb2e603f2ec8\" failed" error="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" May 16 10:05:27.074376 kubelet[2712]: E0516 10:05:27.074348 2712 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="83ad986c750b49fa9a3e8cf94cc08f8293304d748e9df11e9e8fbb2e603f2ec8" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] May 16 10:05:28.015310 kubelet[2712]: I0516 10:05:28.015206 2712 scope.go:117] "RemoveContainer" containerID="fa2dd9b04e61ad78c0cc7207db1e7a6f28723151396926e365d9e386aae9c95a" May 16 10:05:28.015783 kubelet[2712]: I0516 10:05:28.015498 2712 scope.go:117] "RemoveContainer" containerID="83ad986c750b49fa9a3e8cf94cc08f8293304d748e9df11e9e8fbb2e603f2ec8" May 16 10:05:28.015783 kubelet[2712]: E0516 10:05:28.015575 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:05:28.015783 kubelet[2712]: E0516 10:05:28.015653 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 20s restarting failed container=calico-node pod=calico-node-cm57r_calico-system(57d462f3-4a3d-42b3-9610-fcce09d094b9)\"" 
pod="calico-system/calico-node-cm57r" podUID="57d462f3-4a3d-42b3-9610-fcce09d094b9" May 16 10:05:28.017382 containerd[1591]: time="2025-05-16T10:05:28.017347975Z" level=info msg="RemoveContainer for \"fa2dd9b04e61ad78c0cc7207db1e7a6f28723151396926e365d9e386aae9c95a\"" May 16 10:05:28.087823 containerd[1591]: time="2025-05-16T10:05:28.087785509Z" level=info msg="RemoveContainer for \"fa2dd9b04e61ad78c0cc7207db1e7a6f28723151396926e365d9e386aae9c95a\" returns successfully" May 16 10:05:28.567698 systemd[1]: Started sshd@16-10.0.0.79:22-10.0.0.1:57912.service - OpenSSH per-connection server daemon (10.0.0.1:57912). May 16 10:05:28.635166 sshd[4566]: Accepted publickey for core from 10.0.0.1 port 57912 ssh2: RSA SHA256:TkuFkvH6sCJ3kuKrabiD7Z8ORwd+XoH0QjfS0JDvRdI May 16 10:05:28.637097 sshd-session[4566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 10:05:28.641897 systemd-logind[1576]: New session 17 of user core. May 16 10:05:28.650697 systemd[1]: Started session-17.scope - Session 17 of User core. May 16 10:05:28.771470 sshd[4568]: Connection closed by 10.0.0.1 port 57912 May 16 10:05:28.771808 sshd-session[4566]: pam_unix(sshd:session): session closed for user core May 16 10:05:28.774924 systemd[1]: sshd@16-10.0.0.79:22-10.0.0.1:57912.service: Deactivated successfully. May 16 10:05:28.776823 systemd[1]: session-17.scope: Deactivated successfully. May 16 10:05:28.779421 systemd-logind[1576]: Session 17 logged out. Waiting for processes to exit. May 16 10:05:28.780502 systemd-logind[1576]: Removed session 17. 
May 16 10:05:29.021430 kubelet[2712]: I0516 10:05:29.021382 2712 scope.go:117] "RemoveContainer" containerID="83ad986c750b49fa9a3e8cf94cc08f8293304d748e9df11e9e8fbb2e603f2ec8" May 16 10:05:29.021863 kubelet[2712]: E0516 10:05:29.021458 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:05:29.021863 kubelet[2712]: E0516 10:05:29.021558 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 20s restarting failed container=calico-node pod=calico-node-cm57r_calico-system(57d462f3-4a3d-42b3-9610-fcce09d094b9)\"" pod="calico-system/calico-node-cm57r" podUID="57d462f3-4a3d-42b3-9610-fcce09d094b9" May 16 10:05:32.105330 containerd[1591]: time="2025-05-16T10:05:32.105286198Z" level=info msg="StopPodSandbox for \"c834442d2f5ba39b5f2ec0546168c6bf29bbbe0eda1b63f79f453724f7dedd85\"" May 16 10:05:32.109841 containerd[1591]: time="2025-05-16T10:05:32.109806708Z" level=info msg="Container to stop \"de97fc8fe5f7e93030d48cdcc5770bb5cf9a4e86b2b119563c067c34e7e50def\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 10:05:32.109887 containerd[1591]: time="2025-05-16T10:05:32.109841621Z" level=info msg="Container to stop \"0faebb4dc901331f377a4e293c79248347528ba64632494cacba41751afb01ca\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 10:05:32.109887 containerd[1591]: time="2025-05-16T10:05:32.109851179Z" level=info msg="Container to stop \"83ad986c750b49fa9a3e8cf94cc08f8293304d748e9df11e9e8fbb2e603f2ec8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 10:05:32.116451 systemd[1]: cri-containerd-c834442d2f5ba39b5f2ec0546168c6bf29bbbe0eda1b63f79f453724f7dedd85.scope: Deactivated successfully. 
May 16 10:05:32.116792 containerd[1591]: time="2025-05-16T10:05:32.116748131Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c834442d2f5ba39b5f2ec0546168c6bf29bbbe0eda1b63f79f453724f7dedd85\" id:\"c834442d2f5ba39b5f2ec0546168c6bf29bbbe0eda1b63f79f453724f7dedd85\" pid:3281 exit_status:137 exited_at:{seconds:1747389932 nanos:116475139}" May 16 10:05:32.145270 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c834442d2f5ba39b5f2ec0546168c6bf29bbbe0eda1b63f79f453724f7dedd85-rootfs.mount: Deactivated successfully. May 16 10:05:32.204456 containerd[1591]: time="2025-05-16T10:05:32.204406150Z" level=info msg="shim disconnected" id=c834442d2f5ba39b5f2ec0546168c6bf29bbbe0eda1b63f79f453724f7dedd85 namespace=k8s.io May 16 10:05:32.204697 containerd[1591]: time="2025-05-16T10:05:32.204476680Z" level=warning msg="cleaning up after shim disconnected" id=c834442d2f5ba39b5f2ec0546168c6bf29bbbe0eda1b63f79f453724f7dedd85 namespace=k8s.io May 16 10:05:32.207267 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c834442d2f5ba39b5f2ec0546168c6bf29bbbe0eda1b63f79f453724f7dedd85-shm.mount: Deactivated successfully. 
May 16 10:05:32.252269 containerd[1591]: time="2025-05-16T10:05:32.204487821Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 16 10:05:32.252423 containerd[1591]: time="2025-05-16T10:05:32.242225952Z" level=info msg="TearDown network for sandbox \"c834442d2f5ba39b5f2ec0546168c6bf29bbbe0eda1b63f79f453724f7dedd85\" successfully" May 16 10:05:32.252423 containerd[1591]: time="2025-05-16T10:05:32.252357482Z" level=info msg="StopPodSandbox for \"c834442d2f5ba39b5f2ec0546168c6bf29bbbe0eda1b63f79f453724f7dedd85\" returns successfully" May 16 10:05:32.252505 containerd[1591]: time="2025-05-16T10:05:32.251121844Z" level=info msg="received exit event sandbox_id:\"c834442d2f5ba39b5f2ec0546168c6bf29bbbe0eda1b63f79f453724f7dedd85\" exit_status:137 exited_at:{seconds:1747389932 nanos:116475139}" May 16 10:05:32.337046 kubelet[2712]: I0516 10:05:32.336980 2712 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/57d462f3-4a3d-42b3-9610-fcce09d094b9-flexvol-driver-host\") pod \"57d462f3-4a3d-42b3-9610-fcce09d094b9\" (UID: \"57d462f3-4a3d-42b3-9610-fcce09d094b9\") " May 16 10:05:32.337046 kubelet[2712]: I0516 10:05:32.337032 2712 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/57d462f3-4a3d-42b3-9610-fcce09d094b9-policysync\") pod \"57d462f3-4a3d-42b3-9610-fcce09d094b9\" (UID: \"57d462f3-4a3d-42b3-9610-fcce09d094b9\") " May 16 10:05:32.337046 kubelet[2712]: I0516 10:05:32.337052 2712 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/57d462f3-4a3d-42b3-9610-fcce09d094b9-cni-bin-dir\") pod \"57d462f3-4a3d-42b3-9610-fcce09d094b9\" (UID: \"57d462f3-4a3d-42b3-9610-fcce09d094b9\") " May 16 10:05:32.337684 kubelet[2712]: I0516 10:05:32.337068 2712 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/57d462f3-4a3d-42b3-9610-fcce09d094b9-cni-log-dir\") pod \"57d462f3-4a3d-42b3-9610-fcce09d094b9\" (UID: \"57d462f3-4a3d-42b3-9610-fcce09d094b9\") " May 16 10:05:32.337684 kubelet[2712]: I0516 10:05:32.337086 2712 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/57d462f3-4a3d-42b3-9610-fcce09d094b9-cni-net-dir\") pod \"57d462f3-4a3d-42b3-9610-fcce09d094b9\" (UID: \"57d462f3-4a3d-42b3-9610-fcce09d094b9\") " May 16 10:05:32.337684 kubelet[2712]: I0516 10:05:32.337114 2712 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57d462f3-4a3d-42b3-9610-fcce09d094b9-tigera-ca-bundle\") pod \"57d462f3-4a3d-42b3-9610-fcce09d094b9\" (UID: \"57d462f3-4a3d-42b3-9610-fcce09d094b9\") " May 16 10:05:32.337684 kubelet[2712]: I0516 10:05:32.337117 2712 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/57d462f3-4a3d-42b3-9610-fcce09d094b9-policysync" (OuterVolumeSpecName: "policysync") pod "57d462f3-4a3d-42b3-9610-fcce09d094b9" (UID: "57d462f3-4a3d-42b3-9610-fcce09d094b9"). InnerVolumeSpecName "policysync". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 10:05:32.337684 kubelet[2712]: I0516 10:05:32.337136 2712 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/57d462f3-4a3d-42b3-9610-fcce09d094b9-xtables-lock\") pod \"57d462f3-4a3d-42b3-9610-fcce09d094b9\" (UID: \"57d462f3-4a3d-42b3-9610-fcce09d094b9\") " May 16 10:05:32.337684 kubelet[2712]: I0516 10:05:32.337159 2712 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/57d462f3-4a3d-42b3-9610-fcce09d094b9-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "57d462f3-4a3d-42b3-9610-fcce09d094b9" (UID: "57d462f3-4a3d-42b3-9610-fcce09d094b9"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 10:05:32.337852 kubelet[2712]: I0516 10:05:32.337117 2712 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/57d462f3-4a3d-42b3-9610-fcce09d094b9-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "57d462f3-4a3d-42b3-9610-fcce09d094b9" (UID: "57d462f3-4a3d-42b3-9610-fcce09d094b9"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 10:05:32.337852 kubelet[2712]: I0516 10:05:32.337165 2712 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/57d462f3-4a3d-42b3-9610-fcce09d094b9-var-lib-calico\") pod \"57d462f3-4a3d-42b3-9610-fcce09d094b9\" (UID: \"57d462f3-4a3d-42b3-9610-fcce09d094b9\") " May 16 10:05:32.337852 kubelet[2712]: I0516 10:05:32.337191 2712 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/57d462f3-4a3d-42b3-9610-fcce09d094b9-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "57d462f3-4a3d-42b3-9610-fcce09d094b9" (UID: "57d462f3-4a3d-42b3-9610-fcce09d094b9"). 
InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 10:05:32.337852 kubelet[2712]: I0516 10:05:32.337193 2712 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/57d462f3-4a3d-42b3-9610-fcce09d094b9-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "57d462f3-4a3d-42b3-9610-fcce09d094b9" (UID: "57d462f3-4a3d-42b3-9610-fcce09d094b9"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 10:05:32.337852 kubelet[2712]: I0516 10:05:32.337200 2712 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/57d462f3-4a3d-42b3-9610-fcce09d094b9-lib-modules\") pod \"57d462f3-4a3d-42b3-9610-fcce09d094b9\" (UID: \"57d462f3-4a3d-42b3-9610-fcce09d094b9\") " May 16 10:05:32.337964 kubelet[2712]: I0516 10:05:32.337220 2712 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/57d462f3-4a3d-42b3-9610-fcce09d094b9-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "57d462f3-4a3d-42b3-9610-fcce09d094b9" (UID: "57d462f3-4a3d-42b3-9610-fcce09d094b9"). InnerVolumeSpecName "cni-net-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 10:05:32.337964 kubelet[2712]: I0516 10:05:32.337225 2712 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rgzb\" (UniqueName: \"kubernetes.io/projected/57d462f3-4a3d-42b3-9610-fcce09d094b9-kube-api-access-5rgzb\") pod \"57d462f3-4a3d-42b3-9610-fcce09d094b9\" (UID: \"57d462f3-4a3d-42b3-9610-fcce09d094b9\") " May 16 10:05:32.337964 kubelet[2712]: I0516 10:05:32.337242 2712 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/57d462f3-4a3d-42b3-9610-fcce09d094b9-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "57d462f3-4a3d-42b3-9610-fcce09d094b9" (UID: "57d462f3-4a3d-42b3-9610-fcce09d094b9"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 10:05:32.337964 kubelet[2712]: I0516 10:05:32.337247 2712 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/57d462f3-4a3d-42b3-9610-fcce09d094b9-var-run-calico\") pod \"57d462f3-4a3d-42b3-9610-fcce09d094b9\" (UID: \"57d462f3-4a3d-42b3-9610-fcce09d094b9\") " May 16 10:05:32.337964 kubelet[2712]: I0516 10:05:32.337260 2712 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/57d462f3-4a3d-42b3-9610-fcce09d094b9-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "57d462f3-4a3d-42b3-9610-fcce09d094b9" (UID: "57d462f3-4a3d-42b3-9610-fcce09d094b9"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 10:05:32.338123 kubelet[2712]: I0516 10:05:32.337269 2712 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/57d462f3-4a3d-42b3-9610-fcce09d094b9-node-certs\") pod \"57d462f3-4a3d-42b3-9610-fcce09d094b9\" (UID: \"57d462f3-4a3d-42b3-9610-fcce09d094b9\") " May 16 10:05:32.338123 kubelet[2712]: I0516 10:05:32.337316 2712 reconciler_common.go:299] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/57d462f3-4a3d-42b3-9610-fcce09d094b9-policysync\") on node \"localhost\" DevicePath \"\"" May 16 10:05:32.338123 kubelet[2712]: I0516 10:05:32.337328 2712 reconciler_common.go:299] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/57d462f3-4a3d-42b3-9610-fcce09d094b9-cni-bin-dir\") on node \"localhost\" DevicePath \"\"" May 16 10:05:32.338123 kubelet[2712]: I0516 10:05:32.337339 2712 reconciler_common.go:299] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/57d462f3-4a3d-42b3-9610-fcce09d094b9-cni-log-dir\") on node \"localhost\" DevicePath \"\"" May 16 10:05:32.338123 kubelet[2712]: I0516 10:05:32.337349 2712 reconciler_common.go:299] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/57d462f3-4a3d-42b3-9610-fcce09d094b9-cni-net-dir\") on node \"localhost\" DevicePath \"\"" May 16 10:05:32.338123 kubelet[2712]: I0516 10:05:32.337360 2712 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/57d462f3-4a3d-42b3-9610-fcce09d094b9-lib-modules\") on node \"localhost\" DevicePath \"\"" May 16 10:05:32.338123 kubelet[2712]: I0516 10:05:32.337369 2712 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/57d462f3-4a3d-42b3-9610-fcce09d094b9-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 16 
10:05:32.338123 kubelet[2712]: I0516 10:05:32.337380 2712 reconciler_common.go:299] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/57d462f3-4a3d-42b3-9610-fcce09d094b9-var-lib-calico\") on node \"localhost\" DevicePath \"\"" May 16 10:05:32.338372 kubelet[2712]: I0516 10:05:32.337390 2712 reconciler_common.go:299] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/57d462f3-4a3d-42b3-9610-fcce09d094b9-flexvol-driver-host\") on node \"localhost\" DevicePath \"\"" May 16 10:05:32.338372 kubelet[2712]: I0516 10:05:32.337684 2712 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/57d462f3-4a3d-42b3-9610-fcce09d094b9-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "57d462f3-4a3d-42b3-9610-fcce09d094b9" (UID: "57d462f3-4a3d-42b3-9610-fcce09d094b9"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 10:05:32.345703 kubelet[2712]: I0516 10:05:32.341167 2712 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57d462f3-4a3d-42b3-9610-fcce09d094b9-node-certs" (OuterVolumeSpecName: "node-certs") pod "57d462f3-4a3d-42b3-9610-fcce09d094b9" (UID: "57d462f3-4a3d-42b3-9610-fcce09d094b9"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 16 10:05:32.345703 kubelet[2712]: I0516 10:05:32.341204 2712 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57d462f3-4a3d-42b3-9610-fcce09d094b9-kube-api-access-5rgzb" (OuterVolumeSpecName: "kube-api-access-5rgzb") pod "57d462f3-4a3d-42b3-9610-fcce09d094b9" (UID: "57d462f3-4a3d-42b3-9610-fcce09d094b9"). InnerVolumeSpecName "kube-api-access-5rgzb". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 16 10:05:32.345703 kubelet[2712]: I0516 10:05:32.343177 2712 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57d462f3-4a3d-42b3-9610-fcce09d094b9-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "57d462f3-4a3d-42b3-9610-fcce09d094b9" (UID: "57d462f3-4a3d-42b3-9610-fcce09d094b9"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 16 10:05:32.345036 systemd[1]: var-lib-kubelet-pods-57d462f3\x2d4a3d\x2d42b3\x2d9610\x2dfcce09d094b9-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully. May 16 10:05:32.345183 systemd[1]: var-lib-kubelet-pods-57d462f3\x2d4a3d\x2d42b3\x2d9610\x2dfcce09d094b9-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. May 16 10:05:32.345283 systemd[1]: var-lib-kubelet-pods-57d462f3\x2d4a3d\x2d42b3\x2d9610\x2dfcce09d094b9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5rgzb.mount: Deactivated successfully. May 16 10:05:32.365813 kubelet[2712]: I0516 10:05:32.365687 2712 memory_manager.go:355] "RemoveStaleState removing state" podUID="57d462f3-4a3d-42b3-9610-fcce09d094b9" containerName="calico-node" May 16 10:05:32.365813 kubelet[2712]: I0516 10:05:32.365728 2712 memory_manager.go:355] "RemoveStaleState removing state" podUID="57d462f3-4a3d-42b3-9610-fcce09d094b9" containerName="calico-node" May 16 10:05:32.365813 kubelet[2712]: I0516 10:05:32.365736 2712 memory_manager.go:355] "RemoveStaleState removing state" podUID="57d462f3-4a3d-42b3-9610-fcce09d094b9" containerName="calico-node" May 16 10:05:32.376567 systemd[1]: Created slice kubepods-besteffort-pod9fd712d9_10a8_4fa5_8e3a_835e4e700662.slice - libcontainer container kubepods-besteffort-pod9fd712d9_10a8_4fa5_8e3a_835e4e700662.slice. 
May 16 10:05:32.438761 kubelet[2712]: I0516 10:05:32.438715 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9fd712d9-10a8-4fa5-8e3a-835e4e700662-xtables-lock\") pod \"calico-node-wm5bf\" (UID: \"9fd712d9-10a8-4fa5-8e3a-835e4e700662\") " pod="calico-system/calico-node-wm5bf" May 16 10:05:32.438761 kubelet[2712]: I0516 10:05:32.438762 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9fd712d9-10a8-4fa5-8e3a-835e4e700662-policysync\") pod \"calico-node-wm5bf\" (UID: \"9fd712d9-10a8-4fa5-8e3a-835e4e700662\") " pod="calico-system/calico-node-wm5bf" May 16 10:05:32.439009 kubelet[2712]: I0516 10:05:32.438783 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9fd712d9-10a8-4fa5-8e3a-835e4e700662-cni-bin-dir\") pod \"calico-node-wm5bf\" (UID: \"9fd712d9-10a8-4fa5-8e3a-835e4e700662\") " pod="calico-system/calico-node-wm5bf" May 16 10:05:32.439009 kubelet[2712]: I0516 10:05:32.438802 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9fd712d9-10a8-4fa5-8e3a-835e4e700662-cni-net-dir\") pod \"calico-node-wm5bf\" (UID: \"9fd712d9-10a8-4fa5-8e3a-835e4e700662\") " pod="calico-system/calico-node-wm5bf" May 16 10:05:32.439009 kubelet[2712]: I0516 10:05:32.438822 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9fd712d9-10a8-4fa5-8e3a-835e4e700662-var-run-calico\") pod \"calico-node-wm5bf\" (UID: \"9fd712d9-10a8-4fa5-8e3a-835e4e700662\") " pod="calico-system/calico-node-wm5bf" May 16 10:05:32.439009 kubelet[2712]: I0516 10:05:32.438840 2712 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9fd712d9-10a8-4fa5-8e3a-835e4e700662-var-lib-calico\") pod \"calico-node-wm5bf\" (UID: \"9fd712d9-10a8-4fa5-8e3a-835e4e700662\") " pod="calico-system/calico-node-wm5bf" May 16 10:05:32.439009 kubelet[2712]: I0516 10:05:32.438856 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9fd712d9-10a8-4fa5-8e3a-835e4e700662-tigera-ca-bundle\") pod \"calico-node-wm5bf\" (UID: \"9fd712d9-10a8-4fa5-8e3a-835e4e700662\") " pod="calico-system/calico-node-wm5bf" May 16 10:05:32.439169 kubelet[2712]: I0516 10:05:32.438890 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7bf7\" (UniqueName: \"kubernetes.io/projected/9fd712d9-10a8-4fa5-8e3a-835e4e700662-kube-api-access-n7bf7\") pod \"calico-node-wm5bf\" (UID: \"9fd712d9-10a8-4fa5-8e3a-835e4e700662\") " pod="calico-system/calico-node-wm5bf" May 16 10:05:32.439169 kubelet[2712]: I0516 10:05:32.438913 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9fd712d9-10a8-4fa5-8e3a-835e4e700662-lib-modules\") pod \"calico-node-wm5bf\" (UID: \"9fd712d9-10a8-4fa5-8e3a-835e4e700662\") " pod="calico-system/calico-node-wm5bf" May 16 10:05:32.439169 kubelet[2712]: I0516 10:05:32.438933 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9fd712d9-10a8-4fa5-8e3a-835e4e700662-node-certs\") pod \"calico-node-wm5bf\" (UID: \"9fd712d9-10a8-4fa5-8e3a-835e4e700662\") " pod="calico-system/calico-node-wm5bf" May 16 10:05:32.439169 kubelet[2712]: I0516 10:05:32.438951 2712 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9fd712d9-10a8-4fa5-8e3a-835e4e700662-flexvol-driver-host\") pod \"calico-node-wm5bf\" (UID: \"9fd712d9-10a8-4fa5-8e3a-835e4e700662\") " pod="calico-system/calico-node-wm5bf" May 16 10:05:32.439169 kubelet[2712]: I0516 10:05:32.438981 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9fd712d9-10a8-4fa5-8e3a-835e4e700662-cni-log-dir\") pod \"calico-node-wm5bf\" (UID: \"9fd712d9-10a8-4fa5-8e3a-835e4e700662\") " pod="calico-system/calico-node-wm5bf" May 16 10:05:32.439169 kubelet[2712]: I0516 10:05:32.439026 2712 reconciler_common.go:299] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57d462f3-4a3d-42b3-9610-fcce09d094b9-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\"" May 16 10:05:32.439324 kubelet[2712]: I0516 10:05:32.439040 2712 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5rgzb\" (UniqueName: \"kubernetes.io/projected/57d462f3-4a3d-42b3-9610-fcce09d094b9-kube-api-access-5rgzb\") on node \"localhost\" DevicePath \"\"" May 16 10:05:32.439324 kubelet[2712]: I0516 10:05:32.439054 2712 reconciler_common.go:299] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/57d462f3-4a3d-42b3-9610-fcce09d094b9-node-certs\") on node \"localhost\" DevicePath \"\"" May 16 10:05:32.439324 kubelet[2712]: I0516 10:05:32.439064 2712 reconciler_common.go:299] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/57d462f3-4a3d-42b3-9610-fcce09d094b9-var-run-calico\") on node \"localhost\" DevicePath \"\"" May 16 10:05:32.682539 kubelet[2712]: E0516 10:05:32.682391 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" May 16 10:05:32.683535 containerd[1591]: time="2025-05-16T10:05:32.683471823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wm5bf,Uid:9fd712d9-10a8-4fa5-8e3a-835e4e700662,Namespace:calico-system,Attempt:0,}" May 16 10:05:32.705195 containerd[1591]: time="2025-05-16T10:05:32.705150551Z" level=info msg="connecting to shim bdf209e06b8e7bd655fa6f4a393ef2a2974c20f31da749ff05888583e0492ccd" address="unix:///run/containerd/s/fbaa2fc84ace00b3f8c2a5110f254b317ceac7aad6b89bc5703b359bcfe11882" namespace=k8s.io protocol=ttrpc version=3 May 16 10:05:32.731645 systemd[1]: Started cri-containerd-bdf209e06b8e7bd655fa6f4a393ef2a2974c20f31da749ff05888583e0492ccd.scope - libcontainer container bdf209e06b8e7bd655fa6f4a393ef2a2974c20f31da749ff05888583e0492ccd. May 16 10:05:32.762805 containerd[1591]: time="2025-05-16T10:05:32.762762931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wm5bf,Uid:9fd712d9-10a8-4fa5-8e3a-835e4e700662,Namespace:calico-system,Attempt:0,} returns sandbox id \"bdf209e06b8e7bd655fa6f4a393ef2a2974c20f31da749ff05888583e0492ccd\"" May 16 10:05:32.763661 kubelet[2712]: E0516 10:05:32.763626 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:05:32.787510 containerd[1591]: time="2025-05-16T10:05:32.787454832Z" level=info msg="CreateContainer within sandbox \"bdf209e06b8e7bd655fa6f4a393ef2a2974c20f31da749ff05888583e0492ccd\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 16 10:05:32.797278 containerd[1591]: time="2025-05-16T10:05:32.797236309Z" level=info msg="Container f73c8f738e4998b41d4ef5b2faff85470bb644de8262f4badc3674094fc44b2f: CDI devices from CRI Config.CDIDevices: []" May 16 10:05:32.805849 containerd[1591]: time="2025-05-16T10:05:32.805803387Z" level=info msg="CreateContainer within sandbox 
\"bdf209e06b8e7bd655fa6f4a393ef2a2974c20f31da749ff05888583e0492ccd\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f73c8f738e4998b41d4ef5b2faff85470bb644de8262f4badc3674094fc44b2f\"" May 16 10:05:32.806221 containerd[1591]: time="2025-05-16T10:05:32.806173216Z" level=info msg="StartContainer for \"f73c8f738e4998b41d4ef5b2faff85470bb644de8262f4badc3674094fc44b2f\"" May 16 10:05:32.807586 containerd[1591]: time="2025-05-16T10:05:32.807548662Z" level=info msg="connecting to shim f73c8f738e4998b41d4ef5b2faff85470bb644de8262f4badc3674094fc44b2f" address="unix:///run/containerd/s/fbaa2fc84ace00b3f8c2a5110f254b317ceac7aad6b89bc5703b359bcfe11882" protocol=ttrpc version=3 May 16 10:05:32.831674 systemd[1]: Started cri-containerd-f73c8f738e4998b41d4ef5b2faff85470bb644de8262f4badc3674094fc44b2f.scope - libcontainer container f73c8f738e4998b41d4ef5b2faff85470bb644de8262f4badc3674094fc44b2f. May 16 10:05:32.873577 containerd[1591]: time="2025-05-16T10:05:32.873535936Z" level=info msg="StartContainer for \"f73c8f738e4998b41d4ef5b2faff85470bb644de8262f4badc3674094fc44b2f\" returns successfully" May 16 10:05:32.895939 systemd[1]: cri-containerd-f73c8f738e4998b41d4ef5b2faff85470bb644de8262f4badc3674094fc44b2f.scope: Deactivated successfully. May 16 10:05:32.896359 systemd[1]: cri-containerd-f73c8f738e4998b41d4ef5b2faff85470bb644de8262f4badc3674094fc44b2f.scope: Consumed 44ms CPU time, 16.2M memory peak, 7.9M read from disk, 6.3M written to disk. 
May 16 10:05:32.898043 containerd[1591]: time="2025-05-16T10:05:32.898014454Z" level=info msg="received exit event container_id:\"f73c8f738e4998b41d4ef5b2faff85470bb644de8262f4badc3674094fc44b2f\" id:\"f73c8f738e4998b41d4ef5b2faff85470bb644de8262f4badc3674094fc44b2f\" pid:4679 exited_at:{seconds:1747389932 nanos:897784032}" May 16 10:05:32.898512 containerd[1591]: time="2025-05-16T10:05:32.898495167Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f73c8f738e4998b41d4ef5b2faff85470bb644de8262f4badc3674094fc44b2f\" id:\"f73c8f738e4998b41d4ef5b2faff85470bb644de8262f4badc3674094fc44b2f\" pid:4679 exited_at:{seconds:1747389932 nanos:897784032}" May 16 10:05:33.033538 kubelet[2712]: I0516 10:05:33.033469 2712 scope.go:117] "RemoveContainer" containerID="83ad986c750b49fa9a3e8cf94cc08f8293304d748e9df11e9e8fbb2e603f2ec8" May 16 10:05:33.036449 containerd[1591]: time="2025-05-16T10:05:33.036411304Z" level=info msg="RemoveContainer for \"83ad986c750b49fa9a3e8cf94cc08f8293304d748e9df11e9e8fbb2e603f2ec8\"" May 16 10:05:33.036789 kubelet[2712]: E0516 10:05:33.036764 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:05:33.039813 containerd[1591]: time="2025-05-16T10:05:33.039701970Z" level=info msg="CreateContainer within sandbox \"bdf209e06b8e7bd655fa6f4a393ef2a2974c20f31da749ff05888583e0492ccd\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 16 10:05:33.042949 systemd[1]: Removed slice kubepods-besteffort-pod57d462f3_4a3d_42b3_9610_fcce09d094b9.slice - libcontainer container kubepods-besteffort-pod57d462f3_4a3d_42b3_9610_fcce09d094b9.slice. May 16 10:05:33.043087 systemd[1]: kubepods-besteffort-pod57d462f3_4a3d_42b3_9610_fcce09d094b9.slice: Consumed 816ms CPU time, 162M memory peak, 28K read from disk, 160.4M written to disk. 
May 16 10:05:33.046346 containerd[1591]: time="2025-05-16T10:05:33.046312324Z" level=info msg="RemoveContainer for \"83ad986c750b49fa9a3e8cf94cc08f8293304d748e9df11e9e8fbb2e603f2ec8\" returns successfully" May 16 10:05:33.046696 kubelet[2712]: I0516 10:05:33.046677 2712 scope.go:117] "RemoveContainer" containerID="0faebb4dc901331f377a4e293c79248347528ba64632494cacba41751afb01ca" May 16 10:05:33.049205 containerd[1591]: time="2025-05-16T10:05:33.049177667Z" level=info msg="RemoveContainer for \"0faebb4dc901331f377a4e293c79248347528ba64632494cacba41751afb01ca\"" May 16 10:05:33.050563 containerd[1591]: time="2025-05-16T10:05:33.050533282Z" level=info msg="Container 7679a524b2915e00c66d58cc0d66366162f578949f992f2791a3f3ab9e9b162a: CDI devices from CRI Config.CDIDevices: []" May 16 10:05:33.056547 containerd[1591]: time="2025-05-16T10:05:33.056113470Z" level=info msg="RemoveContainer for \"0faebb4dc901331f377a4e293c79248347528ba64632494cacba41751afb01ca\" returns successfully" May 16 10:05:33.056690 kubelet[2712]: I0516 10:05:33.056435 2712 scope.go:117] "RemoveContainer" containerID="de97fc8fe5f7e93030d48cdcc5770bb5cf9a4e86b2b119563c067c34e7e50def" May 16 10:05:33.061177 containerd[1591]: time="2025-05-16T10:05:33.061123169Z" level=info msg="RemoveContainer for \"de97fc8fe5f7e93030d48cdcc5770bb5cf9a4e86b2b119563c067c34e7e50def\"" May 16 10:05:33.072261 containerd[1591]: time="2025-05-16T10:05:33.072089901Z" level=info msg="RemoveContainer for \"de97fc8fe5f7e93030d48cdcc5770bb5cf9a4e86b2b119563c067c34e7e50def\" returns successfully" May 16 10:05:33.073025 containerd[1591]: time="2025-05-16T10:05:33.072977243Z" level=info msg="CreateContainer within sandbox \"bdf209e06b8e7bd655fa6f4a393ef2a2974c20f31da749ff05888583e0492ccd\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7679a524b2915e00c66d58cc0d66366162f578949f992f2791a3f3ab9e9b162a\"" May 16 10:05:33.073427 containerd[1591]: time="2025-05-16T10:05:33.073403147Z" level=info 
msg="StartContainer for \"7679a524b2915e00c66d58cc0d66366162f578949f992f2791a3f3ab9e9b162a\"" May 16 10:05:33.074648 containerd[1591]: time="2025-05-16T10:05:33.074621289Z" level=info msg="connecting to shim 7679a524b2915e00c66d58cc0d66366162f578949f992f2791a3f3ab9e9b162a" address="unix:///run/containerd/s/fbaa2fc84ace00b3f8c2a5110f254b317ceac7aad6b89bc5703b359bcfe11882" protocol=ttrpc version=3 May 16 10:05:33.098737 systemd[1]: Started cri-containerd-7679a524b2915e00c66d58cc0d66366162f578949f992f2791a3f3ab9e9b162a.scope - libcontainer container 7679a524b2915e00c66d58cc0d66366162f578949f992f2791a3f3ab9e9b162a. May 16 10:05:33.146707 containerd[1591]: time="2025-05-16T10:05:33.146664512Z" level=info msg="StartContainer for \"7679a524b2915e00c66d58cc0d66366162f578949f992f2791a3f3ab9e9b162a\" returns successfully" May 16 10:05:33.526154 systemd[1]: cri-containerd-7679a524b2915e00c66d58cc0d66366162f578949f992f2791a3f3ab9e9b162a.scope: Deactivated successfully. May 16 10:05:33.526486 systemd[1]: cri-containerd-7679a524b2915e00c66d58cc0d66366162f578949f992f2791a3f3ab9e9b162a.scope: Consumed 675ms CPU time, 113.2M memory peak, 98M read from disk. 
May 16 10:05:33.526876 containerd[1591]: time="2025-05-16T10:05:33.526839740Z" level=info msg="received exit event container_id:\"7679a524b2915e00c66d58cc0d66366162f578949f992f2791a3f3ab9e9b162a\" id:\"7679a524b2915e00c66d58cc0d66366162f578949f992f2791a3f3ab9e9b162a\" pid:4728 exited_at:{seconds:1747389933 nanos:526498051}" May 16 10:05:33.527320 containerd[1591]: time="2025-05-16T10:05:33.527279989Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7679a524b2915e00c66d58cc0d66366162f578949f992f2791a3f3ab9e9b162a\" id:\"7679a524b2915e00c66d58cc0d66366162f578949f992f2791a3f3ab9e9b162a\" pid:4728 exited_at:{seconds:1747389933 nanos:526498051}" May 16 10:05:33.553154 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7679a524b2915e00c66d58cc0d66366162f578949f992f2791a3f3ab9e9b162a-rootfs.mount: Deactivated successfully. May 16 10:05:33.777401 kubelet[2712]: E0516 10:05:33.777132 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:05:33.778478 containerd[1591]: time="2025-05-16T10:05:33.778315435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mtmhb,Uid:2a935d52-e822-4eeb-870a-53e15a71d983,Namespace:kube-system,Attempt:0,}" May 16 10:05:33.779651 containerd[1591]: time="2025-05-16T10:05:33.779616680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7jrt7,Uid:32e26eae-39a6-476b-8739-ed86db555147,Namespace:calico-system,Attempt:0,}" May 16 10:05:33.782455 kubelet[2712]: I0516 10:05:33.781945 2712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57d462f3-4a3d-42b3-9610-fcce09d094b9" path="/var/lib/kubelet/pods/57d462f3-4a3d-42b3-9610-fcce09d094b9/volumes" May 16 10:05:33.784297 systemd[1]: Started sshd@17-10.0.0.79:22-10.0.0.1:57918.service - OpenSSH per-connection server daemon (10.0.0.1:57918). 
May 16 10:05:33.841262 sshd[4764]: Accepted publickey for core from 10.0.0.1 port 57918 ssh2: RSA SHA256:TkuFkvH6sCJ3kuKrabiD7Z8ORwd+XoH0QjfS0JDvRdI May 16 10:05:33.843177 sshd-session[4764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 10:05:33.849118 systemd-logind[1576]: New session 18 of user core. May 16 10:05:33.855677 systemd[1]: Started session-18.scope - Session 18 of User core. May 16 10:05:33.895663 containerd[1591]: time="2025-05-16T10:05:33.895610160Z" level=error msg="Failed to destroy network for sandbox \"ce6742103ef8def29efa53ae78e32a495edfd68673d118f78fe5bcb6f2d4faa0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 10:05:33.898008 systemd[1]: run-netns-cni\x2db015c371\x2d12c4\x2d9a1a\x2d734d\x2de31054a19fbd.mount: Deactivated successfully. May 16 10:05:33.920938 containerd[1591]: time="2025-05-16T10:05:33.920747057Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mtmhb,Uid:2a935d52-e822-4eeb-870a-53e15a71d983,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce6742103ef8def29efa53ae78e32a495edfd68673d118f78fe5bcb6f2d4faa0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 10:05:33.921117 containerd[1591]: time="2025-05-16T10:05:33.920886594Z" level=error msg="Failed to destroy network for sandbox \"7b1f744fa78fd0aeba2e3e38278341fea120a015d9abcaed9188db94ae18901e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 10:05:33.921412 kubelet[2712]: E0516 10:05:33.921358 2712 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce6742103ef8def29efa53ae78e32a495edfd68673d118f78fe5bcb6f2d4faa0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 10:05:33.921493 kubelet[2712]: E0516 10:05:33.921449 2712 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce6742103ef8def29efa53ae78e32a495edfd68673d118f78fe5bcb6f2d4faa0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-mtmhb" May 16 10:05:33.921493 kubelet[2712]: E0516 10:05:33.921478 2712 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce6742103ef8def29efa53ae78e32a495edfd68673d118f78fe5bcb6f2d4faa0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-mtmhb" May 16 10:05:33.921631 kubelet[2712]: E0516 10:05:33.921590 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-mtmhb_kube-system(2a935d52-e822-4eeb-870a-53e15a71d983)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-mtmhb_kube-system(2a935d52-e822-4eeb-870a-53e15a71d983)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ce6742103ef8def29efa53ae78e32a495edfd68673d118f78fe5bcb6f2d4faa0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-mtmhb" podUID="2a935d52-e822-4eeb-870a-53e15a71d983" May 16 10:05:33.929552 containerd[1591]: time="2025-05-16T10:05:33.929471672Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7jrt7,Uid:32e26eae-39a6-476b-8739-ed86db555147,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b1f744fa78fd0aeba2e3e38278341fea120a015d9abcaed9188db94ae18901e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 10:05:33.930583 kubelet[2712]: E0516 10:05:33.930544 2712 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b1f744fa78fd0aeba2e3e38278341fea120a015d9abcaed9188db94ae18901e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 10:05:33.930653 kubelet[2712]: E0516 10:05:33.930610 2712 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b1f744fa78fd0aeba2e3e38278341fea120a015d9abcaed9188db94ae18901e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7jrt7" May 16 10:05:33.930653 kubelet[2712]: E0516 10:05:33.930643 2712 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b1f744fa78fd0aeba2e3e38278341fea120a015d9abcaed9188db94ae18901e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7jrt7" May 16 10:05:33.930858 kubelet[2712]: E0516 10:05:33.930815 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7jrt7_calico-system(32e26eae-39a6-476b-8739-ed86db555147)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7jrt7_calico-system(32e26eae-39a6-476b-8739-ed86db555147)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7b1f744fa78fd0aeba2e3e38278341fea120a015d9abcaed9188db94ae18901e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7jrt7" podUID="32e26eae-39a6-476b-8739-ed86db555147" May 16 10:05:33.973999 sshd[4781]: Connection closed by 10.0.0.1 port 57918 May 16 10:05:33.974333 sshd-session[4764]: pam_unix(sshd:session): session closed for user core May 16 10:05:33.978175 systemd[1]: sshd@17-10.0.0.79:22-10.0.0.1:57918.service: Deactivated successfully. May 16 10:05:33.979968 systemd[1]: session-18.scope: Deactivated successfully. May 16 10:05:33.980897 systemd-logind[1576]: Session 18 logged out. Waiting for processes to exit. May 16 10:05:33.983171 systemd-logind[1576]: Removed session 18. 
May 16 10:05:34.042459 kubelet[2712]: E0516 10:05:34.042335 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:05:34.050152 containerd[1591]: time="2025-05-16T10:05:34.050115144Z" level=info msg="CreateContainer within sandbox \"bdf209e06b8e7bd655fa6f4a393ef2a2974c20f31da749ff05888583e0492ccd\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 16 10:05:34.132813 containerd[1591]: time="2025-05-16T10:05:34.132761416Z" level=info msg="Container b18b92192dee10a4c915deae61d75784cd2fd53a797e1badfe9825977ea3ebb1: CDI devices from CRI Config.CDIDevices: []" May 16 10:05:34.147062 systemd[1]: run-netns-cni\x2db9687eaf\x2da0e8\x2d39c5\x2de069\x2d74654efa90c8.mount: Deactivated successfully. May 16 10:05:34.192943 containerd[1591]: time="2025-05-16T10:05:34.192899794Z" level=info msg="CreateContainer within sandbox \"bdf209e06b8e7bd655fa6f4a393ef2a2974c20f31da749ff05888583e0492ccd\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b18b92192dee10a4c915deae61d75784cd2fd53a797e1badfe9825977ea3ebb1\"" May 16 10:05:34.193409 containerd[1591]: time="2025-05-16T10:05:34.193374720Z" level=info msg="StartContainer for \"b18b92192dee10a4c915deae61d75784cd2fd53a797e1badfe9825977ea3ebb1\"" May 16 10:05:34.194717 containerd[1591]: time="2025-05-16T10:05:34.194693202Z" level=info msg="connecting to shim b18b92192dee10a4c915deae61d75784cd2fd53a797e1badfe9825977ea3ebb1" address="unix:///run/containerd/s/fbaa2fc84ace00b3f8c2a5110f254b317ceac7aad6b89bc5703b359bcfe11882" protocol=ttrpc version=3 May 16 10:05:34.217658 systemd[1]: Started cri-containerd-b18b92192dee10a4c915deae61d75784cd2fd53a797e1badfe9825977ea3ebb1.scope - libcontainer container b18b92192dee10a4c915deae61d75784cd2fd53a797e1badfe9825977ea3ebb1. 
May 16 10:05:34.278661 containerd[1591]: time="2025-05-16T10:05:34.278618372Z" level=info msg="StartContainer for \"b18b92192dee10a4c915deae61d75784cd2fd53a797e1badfe9825977ea3ebb1\" returns successfully" May 16 10:05:34.776752 containerd[1591]: time="2025-05-16T10:05:34.776710139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86d67fd756-7kxsr,Uid:27ccac50-d53c-4ece-9d32-1bfac8cd5d95,Namespace:calico-apiserver,Attempt:0,}" May 16 10:05:34.776910 containerd[1591]: time="2025-05-16T10:05:34.776729193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57b996595-kz4qp,Uid:7187b294-e77c-457c-b05f-69fc94c6fde1,Namespace:calico-system,Attempt:0,}" May 16 10:05:34.777067 containerd[1591]: time="2025-05-16T10:05:34.776845337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86d67fd756-ft5g4,Uid:0b6655d4-90fa-4706-bd5a-d8806266de04,Namespace:calico-apiserver,Attempt:0,}" May 16 10:05:34.955422 systemd-networkd[1496]: cali0a5f14f0e65: Link UP May 16 10:05:34.956262 systemd-networkd[1496]: cali0a5f14f0e65: Gained carrier May 16 10:05:35.048119 kubelet[2712]: E0516 10:05:35.047717 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:05:35.127975 containerd[1591]: time="2025-05-16T10:05:35.127900753Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b18b92192dee10a4c915deae61d75784cd2fd53a797e1badfe9825977ea3ebb1\" id:\"b111e3ba5aa88b3aa63fa097891eb9ead75b80c1caa13d1364c968c9df149db7\" pid:4998 exit_status:1 exited_at:{seconds:1747389935 nanos:127506074}" May 16 10:05:35.243416 containerd[1591]: 2025-05-16 10:05:34.817 [INFO][4908] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 16 10:05:35.243416 containerd[1591]: 2025-05-16 10:05:34.831 [INFO][4908] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--86d67fd756--7kxsr-eth0 calico-apiserver-86d67fd756- calico-apiserver 27ccac50-d53c-4ece-9d32-1bfac8cd5d95 817 0 2025-05-16 10:04:31 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:86d67fd756 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-86d67fd756-7kxsr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0a5f14f0e65 [] []}} ContainerID="da7185a7f0d916a1a2782ebd9cccf873108415c49ac2073614ba21813d70136a" Namespace="calico-apiserver" Pod="calico-apiserver-86d67fd756-7kxsr" WorkloadEndpoint="localhost-k8s-calico--apiserver--86d67fd756--7kxsr-" May 16 10:05:35.243416 containerd[1591]: 2025-05-16 10:05:34.831 [INFO][4908] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="da7185a7f0d916a1a2782ebd9cccf873108415c49ac2073614ba21813d70136a" Namespace="calico-apiserver" Pod="calico-apiserver-86d67fd756-7kxsr" WorkloadEndpoint="localhost-k8s-calico--apiserver--86d67fd756--7kxsr-eth0" May 16 10:05:35.243416 containerd[1591]: 2025-05-16 10:05:34.880 [INFO][4956] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="da7185a7f0d916a1a2782ebd9cccf873108415c49ac2073614ba21813d70136a" HandleID="k8s-pod-network.da7185a7f0d916a1a2782ebd9cccf873108415c49ac2073614ba21813d70136a" Workload="localhost-k8s-calico--apiserver--86d67fd756--7kxsr-eth0" May 16 10:05:35.243915 containerd[1591]: 2025-05-16 10:05:34.894 [INFO][4956] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="da7185a7f0d916a1a2782ebd9cccf873108415c49ac2073614ba21813d70136a" HandleID="k8s-pod-network.da7185a7f0d916a1a2782ebd9cccf873108415c49ac2073614ba21813d70136a" Workload="localhost-k8s-calico--apiserver--86d67fd756--7kxsr-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002dca30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-86d67fd756-7kxsr", "timestamp":"2025-05-16 10:05:34.880306731 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 10:05:35.243915 containerd[1591]: 2025-05-16 10:05:34.895 [INFO][4956] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 10:05:35.243915 containerd[1591]: 2025-05-16 10:05:34.895 [INFO][4956] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 16 10:05:35.243915 containerd[1591]: 2025-05-16 10:05:34.895 [INFO][4956] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 16 10:05:35.243915 containerd[1591]: 2025-05-16 10:05:34.899 [INFO][4956] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.da7185a7f0d916a1a2782ebd9cccf873108415c49ac2073614ba21813d70136a" host="localhost" May 16 10:05:35.243915 containerd[1591]: 2025-05-16 10:05:34.903 [INFO][4956] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 16 10:05:35.243915 containerd[1591]: 2025-05-16 10:05:34.907 [INFO][4956] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 16 10:05:35.243915 containerd[1591]: 2025-05-16 10:05:34.919 [INFO][4956] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 16 10:05:35.243915 containerd[1591]: 2025-05-16 10:05:34.926 [INFO][4956] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 16 10:05:35.243915 containerd[1591]: 2025-05-16 10:05:34.926 [INFO][4956] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 
handle="k8s-pod-network.da7185a7f0d916a1a2782ebd9cccf873108415c49ac2073614ba21813d70136a" host="localhost" May 16 10:05:35.244139 containerd[1591]: 2025-05-16 10:05:34.927 [INFO][4956] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.da7185a7f0d916a1a2782ebd9cccf873108415c49ac2073614ba21813d70136a May 16 10:05:35.244139 containerd[1591]: 2025-05-16 10:05:34.934 [INFO][4956] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.da7185a7f0d916a1a2782ebd9cccf873108415c49ac2073614ba21813d70136a" host="localhost" May 16 10:05:35.244139 containerd[1591]: 2025-05-16 10:05:34.943 [INFO][4956] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.da7185a7f0d916a1a2782ebd9cccf873108415c49ac2073614ba21813d70136a" host="localhost" May 16 10:05:35.244139 containerd[1591]: 2025-05-16 10:05:34.943 [INFO][4956] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.da7185a7f0d916a1a2782ebd9cccf873108415c49ac2073614ba21813d70136a" host="localhost" May 16 10:05:35.244139 containerd[1591]: 2025-05-16 10:05:34.943 [INFO][4956] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 16 10:05:35.244139 containerd[1591]: 2025-05-16 10:05:34.943 [INFO][4956] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="da7185a7f0d916a1a2782ebd9cccf873108415c49ac2073614ba21813d70136a" HandleID="k8s-pod-network.da7185a7f0d916a1a2782ebd9cccf873108415c49ac2073614ba21813d70136a" Workload="localhost-k8s-calico--apiserver--86d67fd756--7kxsr-eth0" May 16 10:05:35.244262 containerd[1591]: 2025-05-16 10:05:34.947 [INFO][4908] cni-plugin/k8s.go 386: Populated endpoint ContainerID="da7185a7f0d916a1a2782ebd9cccf873108415c49ac2073614ba21813d70136a" Namespace="calico-apiserver" Pod="calico-apiserver-86d67fd756-7kxsr" WorkloadEndpoint="localhost-k8s-calico--apiserver--86d67fd756--7kxsr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--86d67fd756--7kxsr-eth0", GenerateName:"calico-apiserver-86d67fd756-", Namespace:"calico-apiserver", SelfLink:"", UID:"27ccac50-d53c-4ece-9d32-1bfac8cd5d95", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 10, 4, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86d67fd756", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-86d67fd756-7kxsr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0a5f14f0e65", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 16 10:05:35.244383 containerd[1591]: 2025-05-16 10:05:34.947 [INFO][4908] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="da7185a7f0d916a1a2782ebd9cccf873108415c49ac2073614ba21813d70136a" Namespace="calico-apiserver" Pod="calico-apiserver-86d67fd756-7kxsr" WorkloadEndpoint="localhost-k8s-calico--apiserver--86d67fd756--7kxsr-eth0" May 16 10:05:35.244383 containerd[1591]: 2025-05-16 10:05:34.947 [INFO][4908] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0a5f14f0e65 ContainerID="da7185a7f0d916a1a2782ebd9cccf873108415c49ac2073614ba21813d70136a" Namespace="calico-apiserver" Pod="calico-apiserver-86d67fd756-7kxsr" WorkloadEndpoint="localhost-k8s-calico--apiserver--86d67fd756--7kxsr-eth0" May 16 10:05:35.244383 containerd[1591]: 2025-05-16 10:05:34.957 [INFO][4908] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="da7185a7f0d916a1a2782ebd9cccf873108415c49ac2073614ba21813d70136a" Namespace="calico-apiserver" Pod="calico-apiserver-86d67fd756-7kxsr" WorkloadEndpoint="localhost-k8s-calico--apiserver--86d67fd756--7kxsr-eth0" May 16 10:05:35.244448 containerd[1591]: 2025-05-16 10:05:34.958 [INFO][4908] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="da7185a7f0d916a1a2782ebd9cccf873108415c49ac2073614ba21813d70136a" Namespace="calico-apiserver" Pod="calico-apiserver-86d67fd756-7kxsr" WorkloadEndpoint="localhost-k8s-calico--apiserver--86d67fd756--7kxsr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--86d67fd756--7kxsr-eth0", GenerateName:"calico-apiserver-86d67fd756-", Namespace:"calico-apiserver", SelfLink:"", 
UID:"27ccac50-d53c-4ece-9d32-1bfac8cd5d95", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 10, 4, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86d67fd756", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"da7185a7f0d916a1a2782ebd9cccf873108415c49ac2073614ba21813d70136a", Pod:"calico-apiserver-86d67fd756-7kxsr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0a5f14f0e65", MAC:"3e:0a:55:99:09:51", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 16 10:05:35.244501 containerd[1591]: 2025-05-16 10:05:35.241 [INFO][4908] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="da7185a7f0d916a1a2782ebd9cccf873108415c49ac2073614ba21813d70136a" Namespace="calico-apiserver" Pod="calico-apiserver-86d67fd756-7kxsr" WorkloadEndpoint="localhost-k8s-calico--apiserver--86d67fd756--7kxsr-eth0" May 16 10:05:35.582790 kubelet[2712]: I0516 10:05:35.582721 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-wm5bf" podStartSLOduration=3.582702793 podStartE2EDuration="3.582702793s" podCreationTimestamp="2025-05-16 10:05:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-05-16 10:05:35.58231151 +0000 UTC m=+76.004455423" watchObservedRunningTime="2025-05-16 10:05:35.582702793 +0000 UTC m=+76.004846696" May 16 10:05:35.678299 systemd-networkd[1496]: cali5ee5b98361a: Link UP May 16 10:05:35.682545 systemd-networkd[1496]: cali5ee5b98361a: Gained carrier May 16 10:05:35.704886 containerd[1591]: time="2025-05-16T10:05:35.704831943Z" level=info msg="connecting to shim da7185a7f0d916a1a2782ebd9cccf873108415c49ac2073614ba21813d70136a" address="unix:///run/containerd/s/16c1d00f3a38b4f1160be4e67de8203cd99467ff8a6ce7f7d9728207eed4cce0" namespace=k8s.io protocol=ttrpc version=3 May 16 10:05:35.725665 containerd[1591]: 2025-05-16 10:05:34.822 [INFO][4932] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 16 10:05:35.725665 containerd[1591]: 2025-05-16 10:05:34.835 [INFO][4932] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--86d67fd756--ft5g4-eth0 calico-apiserver-86d67fd756- calico-apiserver 0b6655d4-90fa-4706-bd5a-d8806266de04 818 0 2025-05-16 10:04:31 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:86d67fd756 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-86d67fd756-ft5g4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5ee5b98361a [] []}} ContainerID="9f9531049660d2ec6befa0fb9bde34bd8175baf863b3ca527aa62dbb9bb1a62c" Namespace="calico-apiserver" Pod="calico-apiserver-86d67fd756-ft5g4" WorkloadEndpoint="localhost-k8s-calico--apiserver--86d67fd756--ft5g4-" May 16 10:05:35.725665 containerd[1591]: 2025-05-16 10:05:34.835 [INFO][4932] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9f9531049660d2ec6befa0fb9bde34bd8175baf863b3ca527aa62dbb9bb1a62c" 
Namespace="calico-apiserver" Pod="calico-apiserver-86d67fd756-ft5g4" WorkloadEndpoint="localhost-k8s-calico--apiserver--86d67fd756--ft5g4-eth0" May 16 10:05:35.725665 containerd[1591]: 2025-05-16 10:05:34.881 [INFO][4957] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9f9531049660d2ec6befa0fb9bde34bd8175baf863b3ca527aa62dbb9bb1a62c" HandleID="k8s-pod-network.9f9531049660d2ec6befa0fb9bde34bd8175baf863b3ca527aa62dbb9bb1a62c" Workload="localhost-k8s-calico--apiserver--86d67fd756--ft5g4-eth0" May 16 10:05:35.725910 containerd[1591]: 2025-05-16 10:05:34.895 [INFO][4957] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9f9531049660d2ec6befa0fb9bde34bd8175baf863b3ca527aa62dbb9bb1a62c" HandleID="k8s-pod-network.9f9531049660d2ec6befa0fb9bde34bd8175baf863b3ca527aa62dbb9bb1a62c" Workload="localhost-k8s-calico--apiserver--86d67fd756--ft5g4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051b20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-86d67fd756-ft5g4", "timestamp":"2025-05-16 10:05:34.881017732 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 10:05:35.725910 containerd[1591]: 2025-05-16 10:05:34.895 [INFO][4957] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 10:05:35.725910 containerd[1591]: 2025-05-16 10:05:34.943 [INFO][4957] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 16 10:05:35.725910 containerd[1591]: 2025-05-16 10:05:34.943 [INFO][4957] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 16 10:05:35.725910 containerd[1591]: 2025-05-16 10:05:34.993 [INFO][4957] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9f9531049660d2ec6befa0fb9bde34bd8175baf863b3ca527aa62dbb9bb1a62c" host="localhost" May 16 10:05:35.725910 containerd[1591]: 2025-05-16 10:05:35.581 [INFO][4957] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 16 10:05:35.725910 containerd[1591]: 2025-05-16 10:05:35.590 [INFO][4957] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 16 10:05:35.725910 containerd[1591]: 2025-05-16 10:05:35.594 [INFO][4957] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 16 10:05:35.725910 containerd[1591]: 2025-05-16 10:05:35.599 [INFO][4957] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 16 10:05:35.725910 containerd[1591]: 2025-05-16 10:05:35.600 [INFO][4957] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9f9531049660d2ec6befa0fb9bde34bd8175baf863b3ca527aa62dbb9bb1a62c" host="localhost" May 16 10:05:35.726140 containerd[1591]: 2025-05-16 10:05:35.602 [INFO][4957] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9f9531049660d2ec6befa0fb9bde34bd8175baf863b3ca527aa62dbb9bb1a62c May 16 10:05:35.726140 containerd[1591]: 2025-05-16 10:05:35.634 [INFO][4957] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9f9531049660d2ec6befa0fb9bde34bd8175baf863b3ca527aa62dbb9bb1a62c" host="localhost" May 16 10:05:35.726140 containerd[1591]: 2025-05-16 10:05:35.656 [INFO][4957] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.9f9531049660d2ec6befa0fb9bde34bd8175baf863b3ca527aa62dbb9bb1a62c" host="localhost" May 16 10:05:35.726140 containerd[1591]: 2025-05-16 10:05:35.657 [INFO][4957] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.9f9531049660d2ec6befa0fb9bde34bd8175baf863b3ca527aa62dbb9bb1a62c" host="localhost" May 16 10:05:35.726140 containerd[1591]: 2025-05-16 10:05:35.657 [INFO][4957] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 10:05:35.726140 containerd[1591]: 2025-05-16 10:05:35.657 [INFO][4957] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="9f9531049660d2ec6befa0fb9bde34bd8175baf863b3ca527aa62dbb9bb1a62c" HandleID="k8s-pod-network.9f9531049660d2ec6befa0fb9bde34bd8175baf863b3ca527aa62dbb9bb1a62c" Workload="localhost-k8s-calico--apiserver--86d67fd756--ft5g4-eth0" May 16 10:05:35.726297 containerd[1591]: 2025-05-16 10:05:35.668 [INFO][4932] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9f9531049660d2ec6befa0fb9bde34bd8175baf863b3ca527aa62dbb9bb1a62c" Namespace="calico-apiserver" Pod="calico-apiserver-86d67fd756-ft5g4" WorkloadEndpoint="localhost-k8s-calico--apiserver--86d67fd756--ft5g4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--86d67fd756--ft5g4-eth0", GenerateName:"calico-apiserver-86d67fd756-", Namespace:"calico-apiserver", SelfLink:"", UID:"0b6655d4-90fa-4706-bd5a-d8806266de04", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 10, 4, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86d67fd756", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-86d67fd756-ft5g4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5ee5b98361a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 16 10:05:35.726350 containerd[1591]: 2025-05-16 10:05:35.668 [INFO][4932] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="9f9531049660d2ec6befa0fb9bde34bd8175baf863b3ca527aa62dbb9bb1a62c" Namespace="calico-apiserver" Pod="calico-apiserver-86d67fd756-ft5g4" WorkloadEndpoint="localhost-k8s-calico--apiserver--86d67fd756--ft5g4-eth0" May 16 10:05:35.726350 containerd[1591]: 2025-05-16 10:05:35.668 [INFO][4932] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ee5b98361a ContainerID="9f9531049660d2ec6befa0fb9bde34bd8175baf863b3ca527aa62dbb9bb1a62c" Namespace="calico-apiserver" Pod="calico-apiserver-86d67fd756-ft5g4" WorkloadEndpoint="localhost-k8s-calico--apiserver--86d67fd756--ft5g4-eth0" May 16 10:05:35.726350 containerd[1591]: 2025-05-16 10:05:35.680 [INFO][4932] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9f9531049660d2ec6befa0fb9bde34bd8175baf863b3ca527aa62dbb9bb1a62c" Namespace="calico-apiserver" Pod="calico-apiserver-86d67fd756-ft5g4" WorkloadEndpoint="localhost-k8s-calico--apiserver--86d67fd756--ft5g4-eth0" May 16 10:05:35.726413 containerd[1591]: 2025-05-16 10:05:35.684 [INFO][4932] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="9f9531049660d2ec6befa0fb9bde34bd8175baf863b3ca527aa62dbb9bb1a62c" Namespace="calico-apiserver" Pod="calico-apiserver-86d67fd756-ft5g4" WorkloadEndpoint="localhost-k8s-calico--apiserver--86d67fd756--ft5g4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--86d67fd756--ft5g4-eth0", GenerateName:"calico-apiserver-86d67fd756-", Namespace:"calico-apiserver", SelfLink:"", UID:"0b6655d4-90fa-4706-bd5a-d8806266de04", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 10, 4, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86d67fd756", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9f9531049660d2ec6befa0fb9bde34bd8175baf863b3ca527aa62dbb9bb1a62c", Pod:"calico-apiserver-86d67fd756-ft5g4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5ee5b98361a", MAC:"de:a3:8b:f2:10:a0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 16 10:05:35.726469 containerd[1591]: 2025-05-16 10:05:35.719 [INFO][4932] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9f9531049660d2ec6befa0fb9bde34bd8175baf863b3ca527aa62dbb9bb1a62c" 
Namespace="calico-apiserver" Pod="calico-apiserver-86d67fd756-ft5g4" WorkloadEndpoint="localhost-k8s-calico--apiserver--86d67fd756--ft5g4-eth0" May 16 10:05:35.772720 systemd[1]: Started cri-containerd-da7185a7f0d916a1a2782ebd9cccf873108415c49ac2073614ba21813d70136a.scope - libcontainer container da7185a7f0d916a1a2782ebd9cccf873108415c49ac2073614ba21813d70136a. May 16 10:05:35.776706 kubelet[2712]: E0516 10:05:35.776168 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:05:35.777025 containerd[1591]: time="2025-05-16T10:05:35.776983351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bw78z,Uid:02cb407d-a66d-415d-8bed-a4bbe798ffd0,Namespace:kube-system,Attempt:0,}" May 16 10:05:35.779411 systemd-networkd[1496]: calib7a43286cfe: Link UP May 16 10:05:35.781023 systemd-networkd[1496]: calib7a43286cfe: Gained carrier May 16 10:05:35.797747 systemd-resolved[1410]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 10:05:36.030435 containerd[1591]: 2025-05-16 10:05:34.820 [INFO][4920] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 16 10:05:36.030435 containerd[1591]: 2025-05-16 10:05:34.831 [INFO][4920] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--57b996595--kz4qp-eth0 calico-kube-controllers-57b996595- calico-system 7187b294-e77c-457c-b05f-69fc94c6fde1 815 0 2025-05-16 10:04:31 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:57b996595 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-57b996595-kz4qp eth0 calico-kube-controllers [] [] 
[kns.calico-system ksa.calico-system.calico-kube-controllers] calib7a43286cfe [] []}} ContainerID="cc030ff45a452b527922c3c4b539112055ecdfdc5d01d04dc8fc7d923ae82698" Namespace="calico-system" Pod="calico-kube-controllers-57b996595-kz4qp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--57b996595--kz4qp-" May 16 10:05:36.030435 containerd[1591]: 2025-05-16 10:05:34.832 [INFO][4920] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="cc030ff45a452b527922c3c4b539112055ecdfdc5d01d04dc8fc7d923ae82698" Namespace="calico-system" Pod="calico-kube-controllers-57b996595-kz4qp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--57b996595--kz4qp-eth0" May 16 10:05:36.030435 containerd[1591]: 2025-05-16 10:05:34.897 [INFO][4964] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cc030ff45a452b527922c3c4b539112055ecdfdc5d01d04dc8fc7d923ae82698" HandleID="k8s-pod-network.cc030ff45a452b527922c3c4b539112055ecdfdc5d01d04dc8fc7d923ae82698" Workload="localhost-k8s-calico--kube--controllers--57b996595--kz4qp-eth0" May 16 10:05:36.030761 containerd[1591]: 2025-05-16 10:05:34.905 [INFO][4964] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cc030ff45a452b527922c3c4b539112055ecdfdc5d01d04dc8fc7d923ae82698" HandleID="k8s-pod-network.cc030ff45a452b527922c3c4b539112055ecdfdc5d01d04dc8fc7d923ae82698" Workload="localhost-k8s-calico--kube--controllers--57b996595--kz4qp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000502f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-57b996595-kz4qp", "timestamp":"2025-05-16 10:05:34.897331581 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 10:05:36.030761 containerd[1591]: 2025-05-16 10:05:34.905 
[INFO][4964] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 10:05:36.030761 containerd[1591]: 2025-05-16 10:05:35.660 [INFO][4964] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 16 10:05:36.030761 containerd[1591]: 2025-05-16 10:05:35.660 [INFO][4964] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 16 10:05:36.030761 containerd[1591]: 2025-05-16 10:05:35.670 [INFO][4964] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.cc030ff45a452b527922c3c4b539112055ecdfdc5d01d04dc8fc7d923ae82698" host="localhost" May 16 10:05:36.030761 containerd[1591]: 2025-05-16 10:05:35.681 [INFO][4964] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 16 10:05:36.030761 containerd[1591]: 2025-05-16 10:05:35.699 [INFO][4964] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 16 10:05:36.030761 containerd[1591]: 2025-05-16 10:05:35.716 [INFO][4964] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 16 10:05:36.030761 containerd[1591]: 2025-05-16 10:05:35.731 [INFO][4964] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 16 10:05:36.030761 containerd[1591]: 2025-05-16 10:05:35.731 [INFO][4964] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cc030ff45a452b527922c3c4b539112055ecdfdc5d01d04dc8fc7d923ae82698" host="localhost" May 16 10:05:36.031087 containerd[1591]: 2025-05-16 10:05:35.742 [INFO][4964] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.cc030ff45a452b527922c3c4b539112055ecdfdc5d01d04dc8fc7d923ae82698 May 16 10:05:36.031087 containerd[1591]: 2025-05-16 10:05:35.753 [INFO][4964] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cc030ff45a452b527922c3c4b539112055ecdfdc5d01d04dc8fc7d923ae82698" host="localhost" May 16 
10:05:36.031087 containerd[1591]: 2025-05-16 10:05:35.765 [INFO][4964] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.cc030ff45a452b527922c3c4b539112055ecdfdc5d01d04dc8fc7d923ae82698" host="localhost" May 16 10:05:36.031087 containerd[1591]: 2025-05-16 10:05:35.765 [INFO][4964] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.cc030ff45a452b527922c3c4b539112055ecdfdc5d01d04dc8fc7d923ae82698" host="localhost" May 16 10:05:36.031087 containerd[1591]: 2025-05-16 10:05:35.765 [INFO][4964] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 10:05:36.031087 containerd[1591]: 2025-05-16 10:05:35.765 [INFO][4964] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="cc030ff45a452b527922c3c4b539112055ecdfdc5d01d04dc8fc7d923ae82698" HandleID="k8s-pod-network.cc030ff45a452b527922c3c4b539112055ecdfdc5d01d04dc8fc7d923ae82698" Workload="localhost-k8s-calico--kube--controllers--57b996595--kz4qp-eth0" May 16 10:05:36.031248 containerd[1591]: 2025-05-16 10:05:35.772 [INFO][4920] cni-plugin/k8s.go 386: Populated endpoint ContainerID="cc030ff45a452b527922c3c4b539112055ecdfdc5d01d04dc8fc7d923ae82698" Namespace="calico-system" Pod="calico-kube-controllers-57b996595-kz4qp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--57b996595--kz4qp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--57b996595--kz4qp-eth0", GenerateName:"calico-kube-controllers-57b996595-", Namespace:"calico-system", SelfLink:"", UID:"7187b294-e77c-457c-b05f-69fc94c6fde1", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 10, 4, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57b996595", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-57b996595-kz4qp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib7a43286cfe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 16 10:05:36.031316 containerd[1591]: 2025-05-16 10:05:35.772 [INFO][4920] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="cc030ff45a452b527922c3c4b539112055ecdfdc5d01d04dc8fc7d923ae82698" Namespace="calico-system" Pod="calico-kube-controllers-57b996595-kz4qp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--57b996595--kz4qp-eth0" May 16 10:05:36.031316 containerd[1591]: 2025-05-16 10:05:35.772 [INFO][4920] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib7a43286cfe ContainerID="cc030ff45a452b527922c3c4b539112055ecdfdc5d01d04dc8fc7d923ae82698" Namespace="calico-system" Pod="calico-kube-controllers-57b996595-kz4qp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--57b996595--kz4qp-eth0" May 16 10:05:36.031316 containerd[1591]: 2025-05-16 10:05:35.781 [INFO][4920] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cc030ff45a452b527922c3c4b539112055ecdfdc5d01d04dc8fc7d923ae82698" Namespace="calico-system" 
Pod="calico-kube-controllers-57b996595-kz4qp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--57b996595--kz4qp-eth0" May 16 10:05:36.031403 containerd[1591]: 2025-05-16 10:05:35.781 [INFO][4920] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="cc030ff45a452b527922c3c4b539112055ecdfdc5d01d04dc8fc7d923ae82698" Namespace="calico-system" Pod="calico-kube-controllers-57b996595-kz4qp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--57b996595--kz4qp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--57b996595--kz4qp-eth0", GenerateName:"calico-kube-controllers-57b996595-", Namespace:"calico-system", SelfLink:"", UID:"7187b294-e77c-457c-b05f-69fc94c6fde1", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 10, 4, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57b996595", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cc030ff45a452b527922c3c4b539112055ecdfdc5d01d04dc8fc7d923ae82698", Pod:"calico-kube-controllers-57b996595-kz4qp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib7a43286cfe", MAC:"3e:fc:8b:25:c7:c0", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 16 10:05:36.031468 containerd[1591]: 2025-05-16 10:05:36.027 [INFO][4920] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="cc030ff45a452b527922c3c4b539112055ecdfdc5d01d04dc8fc7d923ae82698" Namespace="calico-system" Pod="calico-kube-controllers-57b996595-kz4qp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--57b996595--kz4qp-eth0" May 16 10:05:36.041448 containerd[1591]: time="2025-05-16T10:05:36.041389551Z" level=info msg="connecting to shim 9f9531049660d2ec6befa0fb9bde34bd8175baf863b3ca527aa62dbb9bb1a62c" address="unix:///run/containerd/s/8b9ac3697be48c3b5209a114f673dd898107c6f7ccb952e5a27bd8f27f3338ac" namespace=k8s.io protocol=ttrpc version=3 May 16 10:05:36.051410 kubelet[2712]: E0516 10:05:36.051384 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:05:36.072924 containerd[1591]: time="2025-05-16T10:05:36.072835511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86d67fd756-7kxsr,Uid:27ccac50-d53c-4ece-9d32-1bfac8cd5d95,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"da7185a7f0d916a1a2782ebd9cccf873108415c49ac2073614ba21813d70136a\"" May 16 10:05:36.073741 systemd[1]: Started cri-containerd-9f9531049660d2ec6befa0fb9bde34bd8175baf863b3ca527aa62dbb9bb1a62c.scope - libcontainer container 9f9531049660d2ec6befa0fb9bde34bd8175baf863b3ca527aa62dbb9bb1a62c. 
May 16 10:05:36.074185 containerd[1591]: time="2025-05-16T10:05:36.074113939Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 16 10:05:36.091705 systemd-resolved[1410]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 10:05:36.132546 containerd[1591]: time="2025-05-16T10:05:36.132406790Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b18b92192dee10a4c915deae61d75784cd2fd53a797e1badfe9825977ea3ebb1\" id:\"0bb7673f689c0a03287e905915462174042591a9e1879f1c05aae7ba962801cf\" pid:5254 exit_status:1 exited_at:{seconds:1747389936 nanos:132101515}" May 16 10:05:36.485041 containerd[1591]: time="2025-05-16T10:05:36.484495935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86d67fd756-ft5g4,Uid:0b6655d4-90fa-4706-bd5a-d8806266de04,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"9f9531049660d2ec6befa0fb9bde34bd8175baf863b3ca527aa62dbb9bb1a62c\"" May 16 10:05:36.556301 containerd[1591]: time="2025-05-16T10:05:36.556238560Z" level=info msg="connecting to shim cc030ff45a452b527922c3c4b539112055ecdfdc5d01d04dc8fc7d923ae82698" address="unix:///run/containerd/s/61d314eae7450cc661d1a509173bbef9268299b8524396ead8377fcb3b6fdd49" namespace=k8s.io protocol=ttrpc version=3 May 16 10:05:36.603907 systemd[1]: Started cri-containerd-cc030ff45a452b527922c3c4b539112055ecdfdc5d01d04dc8fc7d923ae82698.scope - libcontainer container cc030ff45a452b527922c3c4b539112055ecdfdc5d01d04dc8fc7d923ae82698. 
May 16 10:05:36.623621 systemd-networkd[1496]: cali184bfd02e5d: Link UP May 16 10:05:36.623896 systemd-networkd[1496]: cali184bfd02e5d: Gained carrier May 16 10:05:36.634436 systemd-resolved[1410]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 10:05:36.642380 containerd[1591]: 2025-05-16 10:05:36.511 [INFO][5292] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--bw78z-eth0 coredns-668d6bf9bc- kube-system 02cb407d-a66d-415d-8bed-a4bbe798ffd0 812 0 2025-05-16 10:04:25 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-bw78z eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali184bfd02e5d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="d7157d30493fd594f6c243a7af10095a0e0be8e2b2de2dc5cfbca34785a95fbe" Namespace="kube-system" Pod="coredns-668d6bf9bc-bw78z" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--bw78z-" May 16 10:05:36.642380 containerd[1591]: 2025-05-16 10:05:36.512 [INFO][5292] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d7157d30493fd594f6c243a7af10095a0e0be8e2b2de2dc5cfbca34785a95fbe" Namespace="kube-system" Pod="coredns-668d6bf9bc-bw78z" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--bw78z-eth0" May 16 10:05:36.642380 containerd[1591]: 2025-05-16 10:05:36.554 [INFO][5307] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d7157d30493fd594f6c243a7af10095a0e0be8e2b2de2dc5cfbca34785a95fbe" HandleID="k8s-pod-network.d7157d30493fd594f6c243a7af10095a0e0be8e2b2de2dc5cfbca34785a95fbe" Workload="localhost-k8s-coredns--668d6bf9bc--bw78z-eth0" May 16 10:05:36.642714 containerd[1591]: 2025-05-16 10:05:36.570 [INFO][5307] ipam/ipam_plugin.go 
265: Auto assigning IP ContainerID="d7157d30493fd594f6c243a7af10095a0e0be8e2b2de2dc5cfbca34785a95fbe" HandleID="k8s-pod-network.d7157d30493fd594f6c243a7af10095a0e0be8e2b2de2dc5cfbca34785a95fbe" Workload="localhost-k8s-coredns--668d6bf9bc--bw78z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002dc810), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-bw78z", "timestamp":"2025-05-16 10:05:36.554908476 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 10:05:36.642714 containerd[1591]: 2025-05-16 10:05:36.570 [INFO][5307] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 10:05:36.642714 containerd[1591]: 2025-05-16 10:05:36.570 [INFO][5307] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 16 10:05:36.642714 containerd[1591]: 2025-05-16 10:05:36.570 [INFO][5307] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 16 10:05:36.642714 containerd[1591]: 2025-05-16 10:05:36.572 [INFO][5307] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d7157d30493fd594f6c243a7af10095a0e0be8e2b2de2dc5cfbca34785a95fbe" host="localhost" May 16 10:05:36.642714 containerd[1591]: 2025-05-16 10:05:36.578 [INFO][5307] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 16 10:05:36.642714 containerd[1591]: 2025-05-16 10:05:36.584 [INFO][5307] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 16 10:05:36.642714 containerd[1591]: 2025-05-16 10:05:36.586 [INFO][5307] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 16 10:05:36.642714 containerd[1591]: 2025-05-16 10:05:36.591 [INFO][5307] ipam/ipam.go 232: Affinity is confirmed and block has been loaded 
cidr=192.168.88.128/26 host="localhost" May 16 10:05:36.642714 containerd[1591]: 2025-05-16 10:05:36.591 [INFO][5307] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d7157d30493fd594f6c243a7af10095a0e0be8e2b2de2dc5cfbca34785a95fbe" host="localhost" May 16 10:05:36.642954 containerd[1591]: 2025-05-16 10:05:36.593 [INFO][5307] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d7157d30493fd594f6c243a7af10095a0e0be8e2b2de2dc5cfbca34785a95fbe May 16 10:05:36.642954 containerd[1591]: 2025-05-16 10:05:36.599 [INFO][5307] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d7157d30493fd594f6c243a7af10095a0e0be8e2b2de2dc5cfbca34785a95fbe" host="localhost" May 16 10:05:36.642954 containerd[1591]: 2025-05-16 10:05:36.608 [INFO][5307] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.d7157d30493fd594f6c243a7af10095a0e0be8e2b2de2dc5cfbca34785a95fbe" host="localhost" May 16 10:05:36.642954 containerd[1591]: 2025-05-16 10:05:36.608 [INFO][5307] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.d7157d30493fd594f6c243a7af10095a0e0be8e2b2de2dc5cfbca34785a95fbe" host="localhost" May 16 10:05:36.642954 containerd[1591]: 2025-05-16 10:05:36.608 [INFO][5307] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 16 10:05:36.642954 containerd[1591]: 2025-05-16 10:05:36.609 [INFO][5307] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="d7157d30493fd594f6c243a7af10095a0e0be8e2b2de2dc5cfbca34785a95fbe" HandleID="k8s-pod-network.d7157d30493fd594f6c243a7af10095a0e0be8e2b2de2dc5cfbca34785a95fbe" Workload="localhost-k8s-coredns--668d6bf9bc--bw78z-eth0" May 16 10:05:36.643079 containerd[1591]: 2025-05-16 10:05:36.615 [INFO][5292] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d7157d30493fd594f6c243a7af10095a0e0be8e2b2de2dc5cfbca34785a95fbe" Namespace="kube-system" Pod="coredns-668d6bf9bc-bw78z" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--bw78z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--bw78z-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"02cb407d-a66d-415d-8bed-a4bbe798ffd0", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 10, 4, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-bw78z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali184bfd02e5d", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 16 10:05:36.643136 containerd[1591]: 2025-05-16 10:05:36.616 [INFO][5292] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="d7157d30493fd594f6c243a7af10095a0e0be8e2b2de2dc5cfbca34785a95fbe" Namespace="kube-system" Pod="coredns-668d6bf9bc-bw78z" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--bw78z-eth0" May 16 10:05:36.643136 containerd[1591]: 2025-05-16 10:05:36.616 [INFO][5292] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali184bfd02e5d ContainerID="d7157d30493fd594f6c243a7af10095a0e0be8e2b2de2dc5cfbca34785a95fbe" Namespace="kube-system" Pod="coredns-668d6bf9bc-bw78z" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--bw78z-eth0" May 16 10:05:36.643136 containerd[1591]: 2025-05-16 10:05:36.624 [INFO][5292] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d7157d30493fd594f6c243a7af10095a0e0be8e2b2de2dc5cfbca34785a95fbe" Namespace="kube-system" Pod="coredns-668d6bf9bc-bw78z" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--bw78z-eth0" May 16 10:05:36.643206 containerd[1591]: 2025-05-16 10:05:36.625 [INFO][5292] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d7157d30493fd594f6c243a7af10095a0e0be8e2b2de2dc5cfbca34785a95fbe" Namespace="kube-system" Pod="coredns-668d6bf9bc-bw78z" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--bw78z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--bw78z-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"02cb407d-a66d-415d-8bed-a4bbe798ffd0", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 10, 4, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d7157d30493fd594f6c243a7af10095a0e0be8e2b2de2dc5cfbca34785a95fbe", Pod:"coredns-668d6bf9bc-bw78z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali184bfd02e5d", MAC:"ca:4d:a2:54:36:51", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 16 10:05:36.643206 containerd[1591]: 2025-05-16 10:05:36.637 [INFO][5292] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d7157d30493fd594f6c243a7af10095a0e0be8e2b2de2dc5cfbca34785a95fbe" Namespace="kube-system" Pod="coredns-668d6bf9bc-bw78z" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--bw78z-eth0" May 16 10:05:36.698281 containerd[1591]: time="2025-05-16T10:05:36.698140758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57b996595-kz4qp,Uid:7187b294-e77c-457c-b05f-69fc94c6fde1,Namespace:calico-system,Attempt:0,} returns sandbox id \"cc030ff45a452b527922c3c4b539112055ecdfdc5d01d04dc8fc7d923ae82698\"" May 16 10:05:36.699918 systemd-networkd[1496]: cali0a5f14f0e65: Gained IPv6LL May 16 10:05:36.704935 systemd-networkd[1496]: vxlan.calico: Link UP May 16 10:05:36.705338 systemd-networkd[1496]: vxlan.calico: Gained carrier May 16 10:05:36.738602 containerd[1591]: time="2025-05-16T10:05:36.737796656Z" level=info msg="connecting to shim d7157d30493fd594f6c243a7af10095a0e0be8e2b2de2dc5cfbca34785a95fbe" address="unix:///run/containerd/s/aa3404e59d498060bd4bcb811b349870a499ea01be02c866d8853f067340f202" namespace=k8s.io protocol=ttrpc version=3 May 16 10:05:36.777888 systemd[1]: Started cri-containerd-d7157d30493fd594f6c243a7af10095a0e0be8e2b2de2dc5cfbca34785a95fbe.scope - libcontainer container d7157d30493fd594f6c243a7af10095a0e0be8e2b2de2dc5cfbca34785a95fbe. 
May 16 10:05:36.796463 systemd-resolved[1410]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 10:05:36.882305 containerd[1591]: time="2025-05-16T10:05:36.882264927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bw78z,Uid:02cb407d-a66d-415d-8bed-a4bbe798ffd0,Namespace:kube-system,Attempt:0,} returns sandbox id \"d7157d30493fd594f6c243a7af10095a0e0be8e2b2de2dc5cfbca34785a95fbe\"" May 16 10:05:36.883282 kubelet[2712]: E0516 10:05:36.883140 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:05:36.886166 containerd[1591]: time="2025-05-16T10:05:36.886134795Z" level=info msg="CreateContainer within sandbox \"d7157d30493fd594f6c243a7af10095a0e0be8e2b2de2dc5cfbca34785a95fbe\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 16 10:05:37.122845 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2021612710.mount: Deactivated successfully. 
May 16 10:05:37.123830 containerd[1591]: time="2025-05-16T10:05:37.123769202Z" level=info msg="Container 4a773b1fa1d145c5c61a794e2bbd6abd8e11f77a98a0e3432db9ca457ffab662: CDI devices from CRI Config.CDIDevices: []" May 16 10:05:37.219844 containerd[1591]: time="2025-05-16T10:05:37.219778764Z" level=info msg="CreateContainer within sandbox \"d7157d30493fd594f6c243a7af10095a0e0be8e2b2de2dc5cfbca34785a95fbe\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4a773b1fa1d145c5c61a794e2bbd6abd8e11f77a98a0e3432db9ca457ffab662\"" May 16 10:05:37.220377 containerd[1591]: time="2025-05-16T10:05:37.220307386Z" level=info msg="StartContainer for \"4a773b1fa1d145c5c61a794e2bbd6abd8e11f77a98a0e3432db9ca457ffab662\"" May 16 10:05:37.221414 containerd[1591]: time="2025-05-16T10:05:37.221386950Z" level=info msg="connecting to shim 4a773b1fa1d145c5c61a794e2bbd6abd8e11f77a98a0e3432db9ca457ffab662" address="unix:///run/containerd/s/aa3404e59d498060bd4bcb811b349870a499ea01be02c866d8853f067340f202" protocol=ttrpc version=3 May 16 10:05:37.246663 systemd[1]: Started cri-containerd-4a773b1fa1d145c5c61a794e2bbd6abd8e11f77a98a0e3432db9ca457ffab662.scope - libcontainer container 4a773b1fa1d145c5c61a794e2bbd6abd8e11f77a98a0e3432db9ca457ffab662. May 16 10:05:37.338705 systemd-networkd[1496]: cali5ee5b98361a: Gained IPv6LL May 16 10:05:37.400719 containerd[1591]: time="2025-05-16T10:05:37.400573044Z" level=info msg="StartContainer for \"4a773b1fa1d145c5c61a794e2bbd6abd8e11f77a98a0e3432db9ca457ffab662\" returns successfully" May 16 10:05:37.465843 systemd-networkd[1496]: calib7a43286cfe: Gained IPv6LL May 16 10:05:37.663922 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3707021427.mount: Deactivated successfully. 
May 16 10:05:37.849742 systemd-networkd[1496]: vxlan.calico: Gained IPv6LL May 16 10:05:37.913737 systemd-networkd[1496]: cali184bfd02e5d: Gained IPv6LL May 16 10:05:38.073072 kubelet[2712]: E0516 10:05:38.072757 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:05:38.207393 kubelet[2712]: I0516 10:05:38.207310 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-bw78z" podStartSLOduration=73.207229292 podStartE2EDuration="1m13.207229292s" podCreationTimestamp="2025-05-16 10:04:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 10:05:38.178284087 +0000 UTC m=+78.600428000" watchObservedRunningTime="2025-05-16 10:05:38.207229292 +0000 UTC m=+78.629373215" May 16 10:05:38.776204 kubelet[2712]: E0516 10:05:38.776158 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:05:38.990460 systemd[1]: Started sshd@18-10.0.0.79:22-10.0.0.1:58276.service - OpenSSH per-connection server daemon (10.0.0.1:58276). May 16 10:05:39.073856 kubelet[2712]: E0516 10:05:39.073777 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:05:39.342928 sshd[5532]: Accepted publickey for core from 10.0.0.1 port 58276 ssh2: RSA SHA256:TkuFkvH6sCJ3kuKrabiD7Z8ORwd+XoH0QjfS0JDvRdI May 16 10:05:39.344776 sshd-session[5532]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 10:05:39.349675 systemd-logind[1576]: New session 19 of user core. 
May 16 10:05:39.358770 systemd[1]: Started session-19.scope - Session 19 of User core. May 16 10:05:39.777230 kubelet[2712]: E0516 10:05:39.777187 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:05:40.075658 kubelet[2712]: E0516 10:05:40.075549 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:05:40.199278 sshd[5535]: Connection closed by 10.0.0.1 port 58276 May 16 10:05:40.199570 sshd-session[5532]: pam_unix(sshd:session): session closed for user core May 16 10:05:40.203670 systemd[1]: sshd@18-10.0.0.79:22-10.0.0.1:58276.service: Deactivated successfully. May 16 10:05:40.205886 systemd[1]: session-19.scope: Deactivated successfully. May 16 10:05:40.206716 systemd-logind[1576]: Session 19 logged out. Waiting for processes to exit. May 16 10:05:40.207934 systemd-logind[1576]: Removed session 19. 
May 16 10:05:44.188959 containerd[1591]: time="2025-05-16T10:05:44.188900829Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 10:05:44.189890 containerd[1591]: time="2025-05-16T10:05:44.189858580Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=43021437" May 16 10:05:44.191063 containerd[1591]: time="2025-05-16T10:05:44.191002990Z" level=info msg="ImageCreate event name:\"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 10:05:44.193323 containerd[1591]: time="2025-05-16T10:05:44.193293777Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 10:05:44.194034 containerd[1591]: time="2025-05-16T10:05:44.193990577Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 8.119852853s" May 16 10:05:44.194084 containerd[1591]: time="2025-05-16T10:05:44.194032927Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 16 10:05:44.195369 containerd[1591]: time="2025-05-16T10:05:44.195099611Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 16 10:05:44.196546 containerd[1591]: time="2025-05-16T10:05:44.196379587Z" level=info msg="CreateContainer within sandbox 
\"da7185a7f0d916a1a2782ebd9cccf873108415c49ac2073614ba21813d70136a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 16 10:05:44.205833 containerd[1591]: time="2025-05-16T10:05:44.205783043Z" level=info msg="Container 3ddaf083229fccd0d4e84bc150714e16685dc2b12914bbbf8f19a014c1843388: CDI devices from CRI Config.CDIDevices: []" May 16 10:05:44.213578 containerd[1591]: time="2025-05-16T10:05:44.213489840Z" level=info msg="CreateContainer within sandbox \"da7185a7f0d916a1a2782ebd9cccf873108415c49ac2073614ba21813d70136a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3ddaf083229fccd0d4e84bc150714e16685dc2b12914bbbf8f19a014c1843388\"" May 16 10:05:44.214126 containerd[1591]: time="2025-05-16T10:05:44.214087995Z" level=info msg="StartContainer for \"3ddaf083229fccd0d4e84bc150714e16685dc2b12914bbbf8f19a014c1843388\"" May 16 10:05:44.215203 containerd[1591]: time="2025-05-16T10:05:44.215173275Z" level=info msg="connecting to shim 3ddaf083229fccd0d4e84bc150714e16685dc2b12914bbbf8f19a014c1843388" address="unix:///run/containerd/s/16c1d00f3a38b4f1160be4e67de8203cd99467ff8a6ce7f7d9728207eed4cce0" protocol=ttrpc version=3 May 16 10:05:44.238734 systemd[1]: Started cri-containerd-3ddaf083229fccd0d4e84bc150714e16685dc2b12914bbbf8f19a014c1843388.scope - libcontainer container 3ddaf083229fccd0d4e84bc150714e16685dc2b12914bbbf8f19a014c1843388. 
May 16 10:05:44.298952 containerd[1591]: time="2025-05-16T10:05:44.298891183Z" level=info msg="StartContainer for \"3ddaf083229fccd0d4e84bc150714e16685dc2b12914bbbf8f19a014c1843388\" returns successfully" May 16 10:05:45.138107 kubelet[2712]: I0516 10:05:45.137907 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-86d67fd756-7kxsr" podStartSLOduration=66.01692522 podStartE2EDuration="1m14.137890499s" podCreationTimestamp="2025-05-16 10:04:31 +0000 UTC" firstStartedPulling="2025-05-16 10:05:36.073943233 +0000 UTC m=+76.496087146" lastFinishedPulling="2025-05-16 10:05:44.194908512 +0000 UTC m=+84.617052425" observedRunningTime="2025-05-16 10:05:45.136299303 +0000 UTC m=+85.558443226" watchObservedRunningTime="2025-05-16 10:05:45.137890499 +0000 UTC m=+85.560034412" May 16 10:05:45.215245 systemd[1]: Started sshd@19-10.0.0.79:22-10.0.0.1:58290.service - OpenSSH per-connection server daemon (10.0.0.1:58290). May 16 10:05:45.272316 sshd[5602]: Accepted publickey for core from 10.0.0.1 port 58290 ssh2: RSA SHA256:TkuFkvH6sCJ3kuKrabiD7Z8ORwd+XoH0QjfS0JDvRdI May 16 10:05:45.275428 sshd-session[5602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 10:05:45.282872 systemd-logind[1576]: New session 20 of user core. May 16 10:05:45.296700 systemd[1]: Started session-20.scope - Session 20 of User core. May 16 10:05:45.419928 sshd[5606]: Connection closed by 10.0.0.1 port 58290 May 16 10:05:45.420177 sshd-session[5602]: pam_unix(sshd:session): session closed for user core May 16 10:05:45.425749 systemd[1]: sshd@19-10.0.0.79:22-10.0.0.1:58290.service: Deactivated successfully. May 16 10:05:45.427651 systemd[1]: session-20.scope: Deactivated successfully. May 16 10:05:45.428490 systemd-logind[1576]: Session 20 logged out. Waiting for processes to exit. May 16 10:05:45.429799 systemd-logind[1576]: Removed session 20. 
May 16 10:05:45.779661 containerd[1591]: time="2025-05-16T10:05:45.779624431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7jrt7,Uid:32e26eae-39a6-476b-8739-ed86db555147,Namespace:calico-system,Attempt:0,}" May 16 10:05:45.896828 systemd-networkd[1496]: calidfd50ccad33: Link UP May 16 10:05:45.897075 systemd-networkd[1496]: calidfd50ccad33: Gained carrier May 16 10:05:45.909700 containerd[1591]: 2025-05-16 10:05:45.838 [INFO][5619] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--7jrt7-eth0 csi-node-driver- calico-system 32e26eae-39a6-476b-8739-ed86db555147 643 0 2025-05-16 10:04:31 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5b5cc68cd5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-7jrt7 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calidfd50ccad33 [] []}} ContainerID="7f2b5f257364a4121255cc75b66057a0240ededa09872d31d54a24b6e85909f0" Namespace="calico-system" Pod="csi-node-driver-7jrt7" WorkloadEndpoint="localhost-k8s-csi--node--driver--7jrt7-" May 16 10:05:45.909700 containerd[1591]: 2025-05-16 10:05:45.838 [INFO][5619] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7f2b5f257364a4121255cc75b66057a0240ededa09872d31d54a24b6e85909f0" Namespace="calico-system" Pod="csi-node-driver-7jrt7" WorkloadEndpoint="localhost-k8s-csi--node--driver--7jrt7-eth0" May 16 10:05:45.909700 containerd[1591]: 2025-05-16 10:05:45.864 [INFO][5633] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7f2b5f257364a4121255cc75b66057a0240ededa09872d31d54a24b6e85909f0" HandleID="k8s-pod-network.7f2b5f257364a4121255cc75b66057a0240ededa09872d31d54a24b6e85909f0" 
Workload="localhost-k8s-csi--node--driver--7jrt7-eth0" May 16 10:05:45.909700 containerd[1591]: 2025-05-16 10:05:45.871 [INFO][5633] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7f2b5f257364a4121255cc75b66057a0240ededa09872d31d54a24b6e85909f0" HandleID="k8s-pod-network.7f2b5f257364a4121255cc75b66057a0240ededa09872d31d54a24b6e85909f0" Workload="localhost-k8s-csi--node--driver--7jrt7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027c0d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-7jrt7", "timestamp":"2025-05-16 10:05:45.864274561 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 10:05:45.909700 containerd[1591]: 2025-05-16 10:05:45.871 [INFO][5633] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 10:05:45.909700 containerd[1591]: 2025-05-16 10:05:45.871 [INFO][5633] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 16 10:05:45.909700 containerd[1591]: 2025-05-16 10:05:45.871 [INFO][5633] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 16 10:05:45.909700 containerd[1591]: 2025-05-16 10:05:45.872 [INFO][5633] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7f2b5f257364a4121255cc75b66057a0240ededa09872d31d54a24b6e85909f0" host="localhost" May 16 10:05:45.909700 containerd[1591]: 2025-05-16 10:05:45.876 [INFO][5633] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 16 10:05:45.909700 containerd[1591]: 2025-05-16 10:05:45.879 [INFO][5633] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 16 10:05:45.909700 containerd[1591]: 2025-05-16 10:05:45.881 [INFO][5633] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 16 10:05:45.909700 containerd[1591]: 2025-05-16 10:05:45.882 [INFO][5633] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 16 10:05:45.909700 containerd[1591]: 2025-05-16 10:05:45.882 [INFO][5633] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7f2b5f257364a4121255cc75b66057a0240ededa09872d31d54a24b6e85909f0" host="localhost" May 16 10:05:45.909700 containerd[1591]: 2025-05-16 10:05:45.884 [INFO][5633] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7f2b5f257364a4121255cc75b66057a0240ededa09872d31d54a24b6e85909f0 May 16 10:05:45.909700 containerd[1591]: 2025-05-16 10:05:45.887 [INFO][5633] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7f2b5f257364a4121255cc75b66057a0240ededa09872d31d54a24b6e85909f0" host="localhost" May 16 10:05:45.909700 containerd[1591]: 2025-05-16 10:05:45.891 [INFO][5633] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.7f2b5f257364a4121255cc75b66057a0240ededa09872d31d54a24b6e85909f0" host="localhost" May 16 10:05:45.909700 containerd[1591]: 2025-05-16 10:05:45.891 [INFO][5633] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.7f2b5f257364a4121255cc75b66057a0240ededa09872d31d54a24b6e85909f0" host="localhost" May 16 10:05:45.909700 containerd[1591]: 2025-05-16 10:05:45.891 [INFO][5633] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 10:05:45.909700 containerd[1591]: 2025-05-16 10:05:45.891 [INFO][5633] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="7f2b5f257364a4121255cc75b66057a0240ededa09872d31d54a24b6e85909f0" HandleID="k8s-pod-network.7f2b5f257364a4121255cc75b66057a0240ededa09872d31d54a24b6e85909f0" Workload="localhost-k8s-csi--node--driver--7jrt7-eth0" May 16 10:05:45.911662 containerd[1591]: 2025-05-16 10:05:45.894 [INFO][5619] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7f2b5f257364a4121255cc75b66057a0240ededa09872d31d54a24b6e85909f0" Namespace="calico-system" Pod="csi-node-driver-7jrt7" WorkloadEndpoint="localhost-k8s-csi--node--driver--7jrt7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7jrt7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"32e26eae-39a6-476b-8739-ed86db555147", ResourceVersion:"643", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 10, 4, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-7jrt7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidfd50ccad33", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 16 10:05:45.911662 containerd[1591]: 2025-05-16 10:05:45.894 [INFO][5619] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="7f2b5f257364a4121255cc75b66057a0240ededa09872d31d54a24b6e85909f0" Namespace="calico-system" Pod="csi-node-driver-7jrt7" WorkloadEndpoint="localhost-k8s-csi--node--driver--7jrt7-eth0" May 16 10:05:45.911662 containerd[1591]: 2025-05-16 10:05:45.894 [INFO][5619] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidfd50ccad33 ContainerID="7f2b5f257364a4121255cc75b66057a0240ededa09872d31d54a24b6e85909f0" Namespace="calico-system" Pod="csi-node-driver-7jrt7" WorkloadEndpoint="localhost-k8s-csi--node--driver--7jrt7-eth0" May 16 10:05:45.911662 containerd[1591]: 2025-05-16 10:05:45.897 [INFO][5619] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7f2b5f257364a4121255cc75b66057a0240ededa09872d31d54a24b6e85909f0" Namespace="calico-system" Pod="csi-node-driver-7jrt7" WorkloadEndpoint="localhost-k8s-csi--node--driver--7jrt7-eth0" May 16 10:05:45.911662 containerd[1591]: 2025-05-16 10:05:45.897 [INFO][5619] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7f2b5f257364a4121255cc75b66057a0240ededa09872d31d54a24b6e85909f0" Namespace="calico-system" 
Pod="csi-node-driver-7jrt7" WorkloadEndpoint="localhost-k8s-csi--node--driver--7jrt7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7jrt7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"32e26eae-39a6-476b-8739-ed86db555147", ResourceVersion:"643", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 10, 4, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7f2b5f257364a4121255cc75b66057a0240ededa09872d31d54a24b6e85909f0", Pod:"csi-node-driver-7jrt7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidfd50ccad33", MAC:"62:e5:ac:19:1e:ec", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 16 10:05:45.911662 containerd[1591]: 2025-05-16 10:05:45.906 [INFO][5619] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7f2b5f257364a4121255cc75b66057a0240ededa09872d31d54a24b6e85909f0" Namespace="calico-system" Pod="csi-node-driver-7jrt7" WorkloadEndpoint="localhost-k8s-csi--node--driver--7jrt7-eth0" May 16 10:05:45.955544 containerd[1591]: 
time="2025-05-16T10:05:45.955452352Z" level=info msg="connecting to shim 7f2b5f257364a4121255cc75b66057a0240ededa09872d31d54a24b6e85909f0" address="unix:///run/containerd/s/68b56635c19c14449b2af7bad256a61418ae1b9038979256d451a82d2af025b1" namespace=k8s.io protocol=ttrpc version=3 May 16 10:05:45.986705 systemd[1]: Started cri-containerd-7f2b5f257364a4121255cc75b66057a0240ededa09872d31d54a24b6e85909f0.scope - libcontainer container 7f2b5f257364a4121255cc75b66057a0240ededa09872d31d54a24b6e85909f0. May 16 10:05:46.002045 systemd-resolved[1410]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 10:05:46.018924 containerd[1591]: time="2025-05-16T10:05:46.018892199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7jrt7,Uid:32e26eae-39a6-476b-8739-ed86db555147,Namespace:calico-system,Attempt:0,} returns sandbox id \"7f2b5f257364a4121255cc75b66057a0240ededa09872d31d54a24b6e85909f0\"" May 16 10:05:46.776233 kubelet[2712]: E0516 10:05:46.776188 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:05:46.776918 containerd[1591]: time="2025-05-16T10:05:46.776626245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mtmhb,Uid:2a935d52-e822-4eeb-870a-53e15a71d983,Namespace:kube-system,Attempt:0,}" May 16 10:05:46.874939 systemd-networkd[1496]: cali8af2db0c4f1: Link UP May 16 10:05:46.875184 systemd-networkd[1496]: cali8af2db0c4f1: Gained carrier May 16 10:05:46.888036 containerd[1591]: 2025-05-16 10:05:46.811 [INFO][5702] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--mtmhb-eth0 coredns-668d6bf9bc- kube-system 2a935d52-e822-4eeb-870a-53e15a71d983 816 0 2025-05-16 10:04:25 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc 
projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-mtmhb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8af2db0c4f1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="aaa30f3f0b09bee370ca4cbdec5c785ecba5b201b8d9cf00073629704b40f254" Namespace="kube-system" Pod="coredns-668d6bf9bc-mtmhb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mtmhb-" May 16 10:05:46.888036 containerd[1591]: 2025-05-16 10:05:46.811 [INFO][5702] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="aaa30f3f0b09bee370ca4cbdec5c785ecba5b201b8d9cf00073629704b40f254" Namespace="kube-system" Pod="coredns-668d6bf9bc-mtmhb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mtmhb-eth0" May 16 10:05:46.888036 containerd[1591]: 2025-05-16 10:05:46.841 [INFO][5718] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aaa30f3f0b09bee370ca4cbdec5c785ecba5b201b8d9cf00073629704b40f254" HandleID="k8s-pod-network.aaa30f3f0b09bee370ca4cbdec5c785ecba5b201b8d9cf00073629704b40f254" Workload="localhost-k8s-coredns--668d6bf9bc--mtmhb-eth0" May 16 10:05:46.888036 containerd[1591]: 2025-05-16 10:05:46.849 [INFO][5718] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="aaa30f3f0b09bee370ca4cbdec5c785ecba5b201b8d9cf00073629704b40f254" HandleID="k8s-pod-network.aaa30f3f0b09bee370ca4cbdec5c785ecba5b201b8d9cf00073629704b40f254" Workload="localhost-k8s-coredns--668d6bf9bc--mtmhb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004a5810), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-mtmhb", "timestamp":"2025-05-16 10:05:46.841725394 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 10:05:46.888036 containerd[1591]: 2025-05-16 10:05:46.849 [INFO][5718] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 10:05:46.888036 containerd[1591]: 2025-05-16 10:05:46.849 [INFO][5718] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 16 10:05:46.888036 containerd[1591]: 2025-05-16 10:05:46.849 [INFO][5718] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 16 10:05:46.888036 containerd[1591]: 2025-05-16 10:05:46.850 [INFO][5718] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.aaa30f3f0b09bee370ca4cbdec5c785ecba5b201b8d9cf00073629704b40f254" host="localhost" May 16 10:05:46.888036 containerd[1591]: 2025-05-16 10:05:46.853 [INFO][5718] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 16 10:05:46.888036 containerd[1591]: 2025-05-16 10:05:46.856 [INFO][5718] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 16 10:05:46.888036 containerd[1591]: 2025-05-16 10:05:46.858 [INFO][5718] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 16 10:05:46.888036 containerd[1591]: 2025-05-16 10:05:46.860 [INFO][5718] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 16 10:05:46.888036 containerd[1591]: 2025-05-16 10:05:46.860 [INFO][5718] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.aaa30f3f0b09bee370ca4cbdec5c785ecba5b201b8d9cf00073629704b40f254" host="localhost" May 16 10:05:46.888036 containerd[1591]: 2025-05-16 10:05:46.861 [INFO][5718] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.aaa30f3f0b09bee370ca4cbdec5c785ecba5b201b8d9cf00073629704b40f254 May 16 10:05:46.888036 containerd[1591]: 2025-05-16 10:05:46.865 [INFO][5718] ipam/ipam.go 1203: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.aaa30f3f0b09bee370ca4cbdec5c785ecba5b201b8d9cf00073629704b40f254" host="localhost" May 16 10:05:46.888036 containerd[1591]: 2025-05-16 10:05:46.869 [INFO][5718] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.aaa30f3f0b09bee370ca4cbdec5c785ecba5b201b8d9cf00073629704b40f254" host="localhost" May 16 10:05:46.888036 containerd[1591]: 2025-05-16 10:05:46.869 [INFO][5718] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.aaa30f3f0b09bee370ca4cbdec5c785ecba5b201b8d9cf00073629704b40f254" host="localhost" May 16 10:05:46.888036 containerd[1591]: 2025-05-16 10:05:46.869 [INFO][5718] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 10:05:46.888036 containerd[1591]: 2025-05-16 10:05:46.870 [INFO][5718] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="aaa30f3f0b09bee370ca4cbdec5c785ecba5b201b8d9cf00073629704b40f254" HandleID="k8s-pod-network.aaa30f3f0b09bee370ca4cbdec5c785ecba5b201b8d9cf00073629704b40f254" Workload="localhost-k8s-coredns--668d6bf9bc--mtmhb-eth0" May 16 10:05:46.889063 containerd[1591]: 2025-05-16 10:05:46.872 [INFO][5702] cni-plugin/k8s.go 386: Populated endpoint ContainerID="aaa30f3f0b09bee370ca4cbdec5c785ecba5b201b8d9cf00073629704b40f254" Namespace="kube-system" Pod="coredns-668d6bf9bc-mtmhb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mtmhb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--mtmhb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2a935d52-e822-4eeb-870a-53e15a71d983", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 10, 4, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-mtmhb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8af2db0c4f1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 16 10:05:46.889063 containerd[1591]: 2025-05-16 10:05:46.872 [INFO][5702] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="aaa30f3f0b09bee370ca4cbdec5c785ecba5b201b8d9cf00073629704b40f254" Namespace="kube-system" Pod="coredns-668d6bf9bc-mtmhb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mtmhb-eth0" May 16 10:05:46.889063 containerd[1591]: 2025-05-16 10:05:46.873 [INFO][5702] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8af2db0c4f1 ContainerID="aaa30f3f0b09bee370ca4cbdec5c785ecba5b201b8d9cf00073629704b40f254" Namespace="kube-system" Pod="coredns-668d6bf9bc-mtmhb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mtmhb-eth0" May 16 10:05:46.889063 containerd[1591]: 2025-05-16 
10:05:46.875 [INFO][5702] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aaa30f3f0b09bee370ca4cbdec5c785ecba5b201b8d9cf00073629704b40f254" Namespace="kube-system" Pod="coredns-668d6bf9bc-mtmhb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mtmhb-eth0" May 16 10:05:46.889063 containerd[1591]: 2025-05-16 10:05:46.875 [INFO][5702] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="aaa30f3f0b09bee370ca4cbdec5c785ecba5b201b8d9cf00073629704b40f254" Namespace="kube-system" Pod="coredns-668d6bf9bc-mtmhb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mtmhb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--mtmhb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2a935d52-e822-4eeb-870a-53e15a71d983", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 10, 4, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"aaa30f3f0b09bee370ca4cbdec5c785ecba5b201b8d9cf00073629704b40f254", Pod:"coredns-668d6bf9bc-mtmhb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8af2db0c4f1", MAC:"fe:e3:0b:23:8d:c3", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 16 10:05:46.889063 containerd[1591]: 2025-05-16 10:05:46.884 [INFO][5702] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="aaa30f3f0b09bee370ca4cbdec5c785ecba5b201b8d9cf00073629704b40f254" Namespace="kube-system" Pod="coredns-668d6bf9bc-mtmhb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mtmhb-eth0" May 16 10:05:46.911914 containerd[1591]: time="2025-05-16T10:05:46.911871900Z" level=info msg="connecting to shim aaa30f3f0b09bee370ca4cbdec5c785ecba5b201b8d9cf00073629704b40f254" address="unix:///run/containerd/s/80a24756e8cd98ee940e507a02e2f2f00fd656d8eedde978d34b7965551a5638" namespace=k8s.io protocol=ttrpc version=3 May 16 10:05:46.940647 systemd[1]: Started cri-containerd-aaa30f3f0b09bee370ca4cbdec5c785ecba5b201b8d9cf00073629704b40f254.scope - libcontainer container aaa30f3f0b09bee370ca4cbdec5c785ecba5b201b8d9cf00073629704b40f254. 
May 16 10:05:46.955612 systemd-resolved[1410]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 10:05:46.987162 containerd[1591]: time="2025-05-16T10:05:46.987118621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mtmhb,Uid:2a935d52-e822-4eeb-870a-53e15a71d983,Namespace:kube-system,Attempt:0,} returns sandbox id \"aaa30f3f0b09bee370ca4cbdec5c785ecba5b201b8d9cf00073629704b40f254\"" May 16 10:05:46.987972 kubelet[2712]: E0516 10:05:46.987943 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:05:46.990151 containerd[1591]: time="2025-05-16T10:05:46.990105574Z" level=info msg="CreateContainer within sandbox \"aaa30f3f0b09bee370ca4cbdec5c785ecba5b201b8d9cf00073629704b40f254\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 16 10:05:47.000339 containerd[1591]: time="2025-05-16T10:05:47.000279788Z" level=info msg="Container 698dacfb11348589b176bb4a5896640e7535cc5032f86244a7382f9107429ab3: CDI devices from CRI Config.CDIDevices: []" May 16 10:05:47.006842 containerd[1591]: time="2025-05-16T10:05:47.006800788Z" level=info msg="CreateContainer within sandbox \"aaa30f3f0b09bee370ca4cbdec5c785ecba5b201b8d9cf00073629704b40f254\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"698dacfb11348589b176bb4a5896640e7535cc5032f86244a7382f9107429ab3\"" May 16 10:05:47.007262 containerd[1591]: time="2025-05-16T10:05:47.007241451Z" level=info msg="StartContainer for \"698dacfb11348589b176bb4a5896640e7535cc5032f86244a7382f9107429ab3\"" May 16 10:05:47.008025 containerd[1591]: time="2025-05-16T10:05:47.007975827Z" level=info msg="connecting to shim 698dacfb11348589b176bb4a5896640e7535cc5032f86244a7382f9107429ab3" address="unix:///run/containerd/s/80a24756e8cd98ee940e507a02e2f2f00fd656d8eedde978d34b7965551a5638" protocol=ttrpc version=3 
May 16 10:05:47.035650 systemd[1]: Started cri-containerd-698dacfb11348589b176bb4a5896640e7535cc5032f86244a7382f9107429ab3.scope - libcontainer container 698dacfb11348589b176bb4a5896640e7535cc5032f86244a7382f9107429ab3. May 16 10:05:47.042169 containerd[1591]: time="2025-05-16T10:05:47.042041896Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 10:05:47.046461 containerd[1591]: time="2025-05-16T10:05:47.046425773Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" May 16 10:05:47.048146 containerd[1591]: time="2025-05-16T10:05:47.048119653Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 2.852986619s" May 16 10:05:47.048530 containerd[1591]: time="2025-05-16T10:05:47.048499630Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 16 10:05:47.051133 containerd[1591]: time="2025-05-16T10:05:47.051098076Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 16 10:05:47.052154 containerd[1591]: time="2025-05-16T10:05:47.052081654Z" level=info msg="CreateContainer within sandbox \"9f9531049660d2ec6befa0fb9bde34bd8175baf863b3ca527aa62dbb9bb1a62c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 16 10:05:47.081827 containerd[1591]: time="2025-05-16T10:05:47.081789803Z" level=info msg="StartContainer for \"698dacfb11348589b176bb4a5896640e7535cc5032f86244a7382f9107429ab3\" returns successfully" May 16 10:05:47.089778 
containerd[1591]: time="2025-05-16T10:05:47.089732572Z" level=info msg="Container c1d305f7c9e1d7845c924f47c08a54608868a849c6b933b2cfc1c093911c9253: CDI devices from CRI Config.CDIDevices: []" May 16 10:05:47.098464 kubelet[2712]: E0516 10:05:47.098132 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:05:47.098611 containerd[1591]: time="2025-05-16T10:05:47.098288909Z" level=info msg="CreateContainer within sandbox \"9f9531049660d2ec6befa0fb9bde34bd8175baf863b3ca527aa62dbb9bb1a62c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c1d305f7c9e1d7845c924f47c08a54608868a849c6b933b2cfc1c093911c9253\"" May 16 10:05:47.098611 containerd[1591]: time="2025-05-16T10:05:47.098560432Z" level=info msg="StartContainer for \"c1d305f7c9e1d7845c924f47c08a54608868a849c6b933b2cfc1c093911c9253\"" May 16 10:05:47.099818 containerd[1591]: time="2025-05-16T10:05:47.099484386Z" level=info msg="connecting to shim c1d305f7c9e1d7845c924f47c08a54608868a849c6b933b2cfc1c093911c9253" address="unix:///run/containerd/s/8b9ac3697be48c3b5209a114f673dd898107c6f7ccb952e5a27bd8f27f3338ac" protocol=ttrpc version=3 May 16 10:05:47.122760 systemd[1]: Started cri-containerd-c1d305f7c9e1d7845c924f47c08a54608868a849c6b933b2cfc1c093911c9253.scope - libcontainer container c1d305f7c9e1d7845c924f47c08a54608868a849c6b933b2cfc1c093911c9253. 
May 16 10:05:47.179623 containerd[1591]: time="2025-05-16T10:05:47.179560036Z" level=info msg="StartContainer for \"c1d305f7c9e1d7845c924f47c08a54608868a849c6b933b2cfc1c093911c9253\" returns successfully" May 16 10:05:47.836637 systemd-networkd[1496]: calidfd50ccad33: Gained IPv6LL May 16 10:05:48.103347 kubelet[2712]: E0516 10:05:48.102571 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:05:48.116463 kubelet[2712]: I0516 10:05:48.116380 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-mtmhb" podStartSLOduration=83.116336141 podStartE2EDuration="1m23.116336141s" podCreationTimestamp="2025-05-16 10:04:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 10:05:47.119247586 +0000 UTC m=+87.541391509" watchObservedRunningTime="2025-05-16 10:05:48.116336141 +0000 UTC m=+88.538480054" May 16 10:05:48.117235 kubelet[2712]: I0516 10:05:48.116711 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-86d67fd756-ft5g4" podStartSLOduration=66.552562967 podStartE2EDuration="1m17.116705349s" podCreationTimestamp="2025-05-16 10:04:31 +0000 UTC" firstStartedPulling="2025-05-16 10:05:36.486828996 +0000 UTC m=+76.908972909" lastFinishedPulling="2025-05-16 10:05:47.050971358 +0000 UTC m=+87.473115291" observedRunningTime="2025-05-16 10:05:48.114273129 +0000 UTC m=+88.536417042" watchObservedRunningTime="2025-05-16 10:05:48.116705349 +0000 UTC m=+88.538849262" May 16 10:05:48.665705 systemd-networkd[1496]: cali8af2db0c4f1: Gained IPv6LL May 16 10:05:49.103657 kubelet[2712]: E0516 10:05:49.103630 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:05:49.776679 kubelet[2712]: E0516 10:05:49.776647 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:05:50.106657 kubelet[2712]: E0516 10:05:50.106506 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 10:05:50.443418 systemd[1]: Started sshd@20-10.0.0.79:22-10.0.0.1:53242.service - OpenSSH per-connection server daemon (10.0.0.1:53242). May 16 10:05:50.517741 sshd[5873]: Accepted publickey for core from 10.0.0.1 port 53242 ssh2: RSA SHA256:TkuFkvH6sCJ3kuKrabiD7Z8ORwd+XoH0QjfS0JDvRdI May 16 10:05:50.519655 sshd-session[5873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 10:05:50.524436 systemd-logind[1576]: New session 21 of user core. May 16 10:05:50.531651 systemd[1]: Started session-21.scope - Session 21 of User core. May 16 10:05:50.658736 sshd[5875]: Connection closed by 10.0.0.1 port 53242 May 16 10:05:50.659111 sshd-session[5873]: pam_unix(sshd:session): session closed for user core May 16 10:05:50.664117 systemd[1]: sshd@20-10.0.0.79:22-10.0.0.1:53242.service: Deactivated successfully. May 16 10:05:50.666434 systemd[1]: session-21.scope: Deactivated successfully. May 16 10:05:50.667298 systemd-logind[1576]: Session 21 logged out. Waiting for processes to exit. May 16 10:05:50.668745 systemd-logind[1576]: Removed session 21. 
May 16 10:05:51.580020 containerd[1591]: time="2025-05-16T10:05:51.579967383Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 10:05:51.581565 containerd[1591]: time="2025-05-16T10:05:51.581077040Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=34789138"
May 16 10:05:51.582995 containerd[1591]: time="2025-05-16T10:05:51.582953873Z" level=info msg="ImageCreate event name:\"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 10:05:51.586535 containerd[1591]: time="2025-05-16T10:05:51.586091029Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 10:05:51.586666 containerd[1591]: time="2025-05-16T10:05:51.586624372Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"36281728\" in 4.535383344s"
May 16 10:05:51.586666 containerd[1591]: time="2025-05-16T10:05:51.586666171Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\""
May 16 10:05:51.587980 containerd[1591]: time="2025-05-16T10:05:51.587844999Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\""
May 16 10:05:51.607572 containerd[1591]: time="2025-05-16T10:05:51.605946707Z" level=info msg="CreateContainer within sandbox \"cc030ff45a452b527922c3c4b539112055ecdfdc5d01d04dc8fc7d923ae82698\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
May 16 10:05:51.616768 containerd[1591]: time="2025-05-16T10:05:51.616729188Z" level=info msg="Container 461f7d177fb4bea8ebeecaf4a727e3d899893e05a0e5d7f2ca1c1b5c72566507: CDI devices from CRI Config.CDIDevices: []"
May 16 10:05:51.626876 containerd[1591]: time="2025-05-16T10:05:51.626839634Z" level=info msg="CreateContainer within sandbox \"cc030ff45a452b527922c3c4b539112055ecdfdc5d01d04dc8fc7d923ae82698\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"461f7d177fb4bea8ebeecaf4a727e3d899893e05a0e5d7f2ca1c1b5c72566507\""
May 16 10:05:51.627382 containerd[1591]: time="2025-05-16T10:05:51.627339293Z" level=info msg="StartContainer for \"461f7d177fb4bea8ebeecaf4a727e3d899893e05a0e5d7f2ca1c1b5c72566507\""
May 16 10:05:51.628432 containerd[1591]: time="2025-05-16T10:05:51.628390399Z" level=info msg="connecting to shim 461f7d177fb4bea8ebeecaf4a727e3d899893e05a0e5d7f2ca1c1b5c72566507" address="unix:///run/containerd/s/61d314eae7450cc661d1a509173bbef9268299b8524396ead8377fcb3b6fdd49" protocol=ttrpc version=3
May 16 10:05:51.672141 systemd[1]: Started cri-containerd-461f7d177fb4bea8ebeecaf4a727e3d899893e05a0e5d7f2ca1c1b5c72566507.scope - libcontainer container 461f7d177fb4bea8ebeecaf4a727e3d899893e05a0e5d7f2ca1c1b5c72566507.
May 16 10:05:51.781585 containerd[1591]: time="2025-05-16T10:05:51.781546350Z" level=info msg="StartContainer for \"461f7d177fb4bea8ebeecaf4a727e3d899893e05a0e5d7f2ca1c1b5c72566507\" returns successfully"
May 16 10:05:52.133484 kubelet[2712]: I0516 10:05:52.133027 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-57b996595-kz4qp" podStartSLOduration=66.245277007 podStartE2EDuration="1m21.133010159s" podCreationTimestamp="2025-05-16 10:04:31 +0000 UTC" firstStartedPulling="2025-05-16 10:05:36.699717659 +0000 UTC m=+77.121861572" lastFinishedPulling="2025-05-16 10:05:51.587450811 +0000 UTC m=+92.009594724" observedRunningTime="2025-05-16 10:05:52.132667067 +0000 UTC m=+92.554810980" watchObservedRunningTime="2025-05-16 10:05:52.133010159 +0000 UTC m=+92.555154072"
May 16 10:05:52.170147 containerd[1591]: time="2025-05-16T10:05:52.170105946Z" level=info msg="TaskExit event in podsandbox handler container_id:\"461f7d177fb4bea8ebeecaf4a727e3d899893e05a0e5d7f2ca1c1b5c72566507\" id:\"a1741a41e84c4c410172ff2add59c10f5fbd02d1b4cb74d9b8a0587b0923fca9\" pid:5940 exited_at:{seconds:1747389952 nanos:169841464}"
May 16 10:05:53.702246 containerd[1591]: time="2025-05-16T10:05:53.702183667Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 10:05:53.703228 containerd[1591]: time="2025-05-16T10:05:53.703189571Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898"
May 16 10:05:53.704765 containerd[1591]: time="2025-05-16T10:05:53.704720835Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 10:05:53.706677 containerd[1591]: time="2025-05-16T10:05:53.706638265Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 10:05:53.707232 containerd[1591]: time="2025-05-16T10:05:53.707203691Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 2.119331339s"
May 16 10:05:53.707287 containerd[1591]: time="2025-05-16T10:05:53.707233919Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\""
May 16 10:05:53.709122 containerd[1591]: time="2025-05-16T10:05:53.709091514Z" level=info msg="CreateContainer within sandbox \"7f2b5f257364a4121255cc75b66057a0240ededa09872d31d54a24b6e85909f0\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
May 16 10:05:54.555663 containerd[1591]: time="2025-05-16T10:05:54.555588972Z" level=info msg="Container 8e65ca2ec85c7318cc6e38a69fb39f6c881988c71c8dcaae831bf95f0d141d92: CDI devices from CRI Config.CDIDevices: []"
May 16 10:05:54.670694 containerd[1591]: time="2025-05-16T10:05:54.670627104Z" level=info msg="CreateContainer within sandbox \"7f2b5f257364a4121255cc75b66057a0240ededa09872d31d54a24b6e85909f0\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"8e65ca2ec85c7318cc6e38a69fb39f6c881988c71c8dcaae831bf95f0d141d92\""
May 16 10:05:54.671196 containerd[1591]: time="2025-05-16T10:05:54.671164618Z" level=info msg="StartContainer for \"8e65ca2ec85c7318cc6e38a69fb39f6c881988c71c8dcaae831bf95f0d141d92\""
May 16 10:05:54.672808 containerd[1591]: time="2025-05-16T10:05:54.672769276Z" level=info msg="connecting to shim 8e65ca2ec85c7318cc6e38a69fb39f6c881988c71c8dcaae831bf95f0d141d92" address="unix:///run/containerd/s/68b56635c19c14449b2af7bad256a61418ae1b9038979256d451a82d2af025b1" protocol=ttrpc version=3
May 16 10:05:54.698683 systemd[1]: Started cri-containerd-8e65ca2ec85c7318cc6e38a69fb39f6c881988c71c8dcaae831bf95f0d141d92.scope - libcontainer container 8e65ca2ec85c7318cc6e38a69fb39f6c881988c71c8dcaae831bf95f0d141d92.
May 16 10:05:54.744642 containerd[1591]: time="2025-05-16T10:05:54.744596752Z" level=info msg="StartContainer for \"8e65ca2ec85c7318cc6e38a69fb39f6c881988c71c8dcaae831bf95f0d141d92\" returns successfully"
May 16 10:05:54.747609 containerd[1591]: time="2025-05-16T10:05:54.747559096Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\""
May 16 10:05:54.779276 kubelet[2712]: E0516 10:05:54.779233 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 10:05:55.676154 systemd[1]: Started sshd@21-10.0.0.79:22-10.0.0.1:53258.service - OpenSSH per-connection server daemon (10.0.0.1:53258).
May 16 10:05:55.740911 sshd[5986]: Accepted publickey for core from 10.0.0.1 port 53258 ssh2: RSA SHA256:TkuFkvH6sCJ3kuKrabiD7Z8ORwd+XoH0QjfS0JDvRdI
May 16 10:05:55.743071 sshd-session[5986]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 10:05:55.749003 systemd-logind[1576]: New session 22 of user core.
May 16 10:05:55.755696 systemd[1]: Started session-22.scope - Session 22 of User core.
May 16 10:05:55.882821 sshd[5988]: Connection closed by 10.0.0.1 port 53258
May 16 10:05:55.883215 sshd-session[5986]: pam_unix(sshd:session): session closed for user core
May 16 10:05:55.892428 systemd[1]: sshd@21-10.0.0.79:22-10.0.0.1:53258.service: Deactivated successfully.
May 16 10:05:55.894415 systemd[1]: session-22.scope: Deactivated successfully.
May 16 10:05:55.895484 systemd-logind[1576]: Session 22 logged out. Waiting for processes to exit.
May 16 10:05:55.899738 systemd[1]: Started sshd@22-10.0.0.79:22-10.0.0.1:53262.service - OpenSSH per-connection server daemon (10.0.0.1:53262).
May 16 10:05:55.900612 systemd-logind[1576]: Removed session 22.
May 16 10:05:55.954811 sshd[6002]: Accepted publickey for core from 10.0.0.1 port 53262 ssh2: RSA SHA256:TkuFkvH6sCJ3kuKrabiD7Z8ORwd+XoH0QjfS0JDvRdI
May 16 10:05:55.956279 sshd-session[6002]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 10:05:55.961295 systemd-logind[1576]: New session 23 of user core.
May 16 10:05:55.974796 systemd[1]: Started session-23.scope - Session 23 of User core.
May 16 10:05:56.257208 sshd[6004]: Connection closed by 10.0.0.1 port 53262
May 16 10:05:56.257431 sshd-session[6002]: pam_unix(sshd:session): session closed for user core
May 16 10:05:56.272207 systemd[1]: sshd@22-10.0.0.79:22-10.0.0.1:53262.service: Deactivated successfully.
May 16 10:05:56.274199 systemd[1]: session-23.scope: Deactivated successfully.
May 16 10:05:56.274943 systemd-logind[1576]: Session 23 logged out. Waiting for processes to exit.
May 16 10:05:56.277575 systemd[1]: Started sshd@23-10.0.0.79:22-10.0.0.1:53268.service - OpenSSH per-connection server daemon (10.0.0.1:53268).
May 16 10:05:56.278169 systemd-logind[1576]: Removed session 23.
May 16 10:05:56.332302 sshd[6015]: Accepted publickey for core from 10.0.0.1 port 53268 ssh2: RSA SHA256:TkuFkvH6sCJ3kuKrabiD7Z8ORwd+XoH0QjfS0JDvRdI
May 16 10:05:56.333829 sshd-session[6015]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 10:05:56.339870 systemd-logind[1576]: New session 24 of user core.
May 16 10:05:56.351684 systemd[1]: Started session-24.scope - Session 24 of User core.
May 16 10:05:56.776101 kubelet[2712]: E0516 10:05:56.776041 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 10:05:57.387088 sshd[6017]: Connection closed by 10.0.0.1 port 53268
May 16 10:05:57.387848 sshd-session[6015]: pam_unix(sshd:session): session closed for user core
May 16 10:05:57.401218 systemd[1]: sshd@23-10.0.0.79:22-10.0.0.1:53268.service: Deactivated successfully.
May 16 10:05:57.405454 systemd[1]: session-24.scope: Deactivated successfully.
May 16 10:05:57.409588 systemd-logind[1576]: Session 24 logged out. Waiting for processes to exit.
May 16 10:05:57.414786 systemd[1]: Started sshd@24-10.0.0.79:22-10.0.0.1:53276.service - OpenSSH per-connection server daemon (10.0.0.1:53276).
May 16 10:05:57.416237 systemd-logind[1576]: Removed session 24.
May 16 10:05:57.476331 sshd[6045]: Accepted publickey for core from 10.0.0.1 port 53276 ssh2: RSA SHA256:TkuFkvH6sCJ3kuKrabiD7Z8ORwd+XoH0QjfS0JDvRdI
May 16 10:05:57.477967 sshd-session[6045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 10:05:57.483021 systemd-logind[1576]: New session 25 of user core.
May 16 10:05:57.490673 systemd[1]: Started session-25.scope - Session 25 of User core.
May 16 10:05:57.713381 sshd[6047]: Connection closed by 10.0.0.1 port 53276
May 16 10:05:57.714412 sshd-session[6045]: pam_unix(sshd:session): session closed for user core
May 16 10:05:57.726302 systemd[1]: sshd@24-10.0.0.79:22-10.0.0.1:53276.service: Deactivated successfully.
May 16 10:05:57.728117 systemd[1]: session-25.scope: Deactivated successfully.
May 16 10:05:57.730026 systemd-logind[1576]: Session 25 logged out. Waiting for processes to exit.
May 16 10:05:57.732541 systemd[1]: Started sshd@25-10.0.0.79:22-10.0.0.1:53292.service - OpenSSH per-connection server daemon (10.0.0.1:53292).
May 16 10:05:57.733126 systemd-logind[1576]: Removed session 25.
May 16 10:05:57.787871 sshd[6058]: Accepted publickey for core from 10.0.0.1 port 53292 ssh2: RSA SHA256:TkuFkvH6sCJ3kuKrabiD7Z8ORwd+XoH0QjfS0JDvRdI
May 16 10:05:57.789440 sshd-session[6058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 10:05:57.794320 systemd-logind[1576]: New session 26 of user core.
May 16 10:05:57.799665 systemd[1]: Started session-26.scope - Session 26 of User core.
May 16 10:05:58.183698 sshd[6060]: Connection closed by 10.0.0.1 port 53292
May 16 10:05:58.184202 sshd-session[6058]: pam_unix(sshd:session): session closed for user core
May 16 10:05:58.189693 systemd[1]: sshd@25-10.0.0.79:22-10.0.0.1:53292.service: Deactivated successfully.
May 16 10:05:58.193203 systemd[1]: session-26.scope: Deactivated successfully.
May 16 10:05:58.195502 systemd-logind[1576]: Session 26 logged out. Waiting for processes to exit.
May 16 10:05:58.197833 systemd-logind[1576]: Removed session 26.
May 16 10:05:58.332428 containerd[1591]: time="2025-05-16T10:05:58.332330443Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 10:05:58.333984 containerd[1591]: time="2025-05-16T10:05:58.333938921Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773"
May 16 10:05:58.335860 containerd[1591]: time="2025-05-16T10:05:58.335815251Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 10:05:58.338697 containerd[1591]: time="2025-05-16T10:05:58.338651709Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 10:05:58.339437 containerd[1591]: time="2025-05-16T10:05:58.339371588Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 3.591766904s"
May 16 10:05:58.339437 containerd[1591]: time="2025-05-16T10:05:58.339406053Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\""
May 16 10:05:58.341327 containerd[1591]: time="2025-05-16T10:05:58.341297824Z" level=info msg="CreateContainer within sandbox \"7f2b5f257364a4121255cc75b66057a0240ededa09872d31d54a24b6e85909f0\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
May 16 10:05:58.351040 containerd[1591]: time="2025-05-16T10:05:58.350988218Z" level=info msg="Container c7c87233418bc08b86134289da4c0cfb01ce518d7fd180133c4a4db4b58ab48f: CDI devices from CRI Config.CDIDevices: []"
May 16 10:05:58.365376 containerd[1591]: time="2025-05-16T10:05:58.365322960Z" level=info msg="CreateContainer within sandbox \"7f2b5f257364a4121255cc75b66057a0240ededa09872d31d54a24b6e85909f0\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"c7c87233418bc08b86134289da4c0cfb01ce518d7fd180133c4a4db4b58ab48f\""
May 16 10:05:58.365856 containerd[1591]: time="2025-05-16T10:05:58.365836914Z" level=info msg="StartContainer for \"c7c87233418bc08b86134289da4c0cfb01ce518d7fd180133c4a4db4b58ab48f\""
May 16 10:05:58.367088 containerd[1591]: time="2025-05-16T10:05:58.367067037Z" level=info msg="connecting to shim c7c87233418bc08b86134289da4c0cfb01ce518d7fd180133c4a4db4b58ab48f" address="unix:///run/containerd/s/68b56635c19c14449b2af7bad256a61418ae1b9038979256d451a82d2af025b1" protocol=ttrpc version=3
May 16 10:05:58.392734 systemd[1]: Started cri-containerd-c7c87233418bc08b86134289da4c0cfb01ce518d7fd180133c4a4db4b58ab48f.scope - libcontainer container c7c87233418bc08b86134289da4c0cfb01ce518d7fd180133c4a4db4b58ab48f.
May 16 10:05:58.437589 containerd[1591]: time="2025-05-16T10:05:58.437489485Z" level=info msg="StartContainer for \"c7c87233418bc08b86134289da4c0cfb01ce518d7fd180133c4a4db4b58ab48f\" returns successfully"
May 16 10:05:58.889623 kubelet[2712]: I0516 10:05:58.889568 2712 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
May 16 10:05:58.889623 kubelet[2712]: I0516 10:05:58.889610 2712 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
May 16 10:06:03.209073 systemd[1]: Started sshd@26-10.0.0.79:22-10.0.0.1:46092.service - OpenSSH per-connection server daemon (10.0.0.1:46092).
May 16 10:06:03.267551 sshd[6110]: Accepted publickey for core from 10.0.0.1 port 46092 ssh2: RSA SHA256:TkuFkvH6sCJ3kuKrabiD7Z8ORwd+XoH0QjfS0JDvRdI
May 16 10:06:03.269497 sshd-session[6110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 10:06:03.273980 systemd-logind[1576]: New session 27 of user core.
May 16 10:06:03.283729 systemd[1]: Started session-27.scope - Session 27 of User core.
May 16 10:06:03.404175 sshd[6112]: Connection closed by 10.0.0.1 port 46092
May 16 10:06:03.405177 containerd[1591]: time="2025-05-16T10:06:03.404507825Z" level=info msg="TaskExit event in podsandbox handler container_id:\"461f7d177fb4bea8ebeecaf4a727e3d899893e05a0e5d7f2ca1c1b5c72566507\" id:\"84a4fcb42fb497695277d24ca9e8d2a9c031689248ff3dfbdd138a57953f2c5f\" pid:6135 exited_at:{seconds:1747389963 nanos:403829290}"
May 16 10:06:03.404670 sshd-session[6110]: pam_unix(sshd:session): session closed for user core
May 16 10:06:03.409491 systemd[1]: sshd@26-10.0.0.79:22-10.0.0.1:46092.service: Deactivated successfully.
May 16 10:06:03.412016 systemd[1]: session-27.scope: Deactivated successfully.
May 16 10:06:03.412817 systemd-logind[1576]: Session 27 logged out. Waiting for processes to exit.
May 16 10:06:03.414032 systemd-logind[1576]: Removed session 27.
May 16 10:06:06.153328 containerd[1591]: time="2025-05-16T10:06:06.153277843Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b18b92192dee10a4c915deae61d75784cd2fd53a797e1badfe9825977ea3ebb1\" id:\"c3b194efdbe4a3409e9b585ffc8bdf7e8b1a839882f56720b7cba710d23be509\" pid:6160 exited_at:{seconds:1747389966 nanos:153010938}"
May 16 10:06:06.155278 kubelet[2712]: E0516 10:06:06.155252 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 10:06:06.190826 kubelet[2712]: I0516 10:06:06.190741 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-7jrt7" podStartSLOduration=82.871264827 podStartE2EDuration="1m35.190725596s" podCreationTimestamp="2025-05-16 10:04:31 +0000 UTC" firstStartedPulling="2025-05-16 10:05:46.020689899 +0000 UTC m=+86.442833812" lastFinishedPulling="2025-05-16 10:05:58.340150668 +0000 UTC m=+98.762294581" observedRunningTime="2025-05-16 10:05:59.150175463 +0000 UTC m=+99.572319386" watchObservedRunningTime="2025-05-16 10:06:06.190725596 +0000 UTC m=+106.612869509"
May 16 10:06:08.420563 systemd[1]: Started sshd@27-10.0.0.79:22-10.0.0.1:53186.service - OpenSSH per-connection server daemon (10.0.0.1:53186).
May 16 10:06:08.479973 sshd[6176]: Accepted publickey for core from 10.0.0.1 port 53186 ssh2: RSA SHA256:TkuFkvH6sCJ3kuKrabiD7Z8ORwd+XoH0QjfS0JDvRdI
May 16 10:06:08.481677 sshd-session[6176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 10:06:08.487348 systemd-logind[1576]: New session 28 of user core.
May 16 10:06:08.494656 systemd[1]: Started session-28.scope - Session 28 of User core.
May 16 10:06:08.612784 sshd[6179]: Connection closed by 10.0.0.1 port 53186
May 16 10:06:08.613097 sshd-session[6176]: pam_unix(sshd:session): session closed for user core
May 16 10:06:08.618347 systemd[1]: sshd@27-10.0.0.79:22-10.0.0.1:53186.service: Deactivated successfully.
May 16 10:06:08.621248 systemd[1]: session-28.scope: Deactivated successfully.
May 16 10:06:08.623273 systemd-logind[1576]: Session 28 logged out. Waiting for processes to exit.
May 16 10:06:08.625948 systemd-logind[1576]: Removed session 28.
May 16 10:06:13.626621 systemd[1]: Started sshd@28-10.0.0.79:22-10.0.0.1:53190.service - OpenSSH per-connection server daemon (10.0.0.1:53190).
May 16 10:06:13.677482 sshd[6194]: Accepted publickey for core from 10.0.0.1 port 53190 ssh2: RSA SHA256:TkuFkvH6sCJ3kuKrabiD7Z8ORwd+XoH0QjfS0JDvRdI
May 16 10:06:13.679422 sshd-session[6194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 10:06:13.685013 systemd-logind[1576]: New session 29 of user core.
May 16 10:06:13.696644 systemd[1]: Started session-29.scope - Session 29 of User core.
May 16 10:06:13.814228 sshd[6196]: Connection closed by 10.0.0.1 port 53190
May 16 10:06:13.814532 sshd-session[6194]: pam_unix(sshd:session): session closed for user core
May 16 10:06:13.818646 systemd[1]: sshd@28-10.0.0.79:22-10.0.0.1:53190.service: Deactivated successfully.
May 16 10:06:13.820877 systemd[1]: session-29.scope: Deactivated successfully.
May 16 10:06:13.822111 systemd-logind[1576]: Session 29 logged out. Waiting for processes to exit.
May 16 10:06:13.823461 systemd-logind[1576]: Removed session 29.
May 16 10:06:18.827136 systemd[1]: Started sshd@29-10.0.0.79:22-10.0.0.1:54534.service - OpenSSH per-connection server daemon (10.0.0.1:54534).
May 16 10:06:18.880993 sshd[6216]: Accepted publickey for core from 10.0.0.1 port 54534 ssh2: RSA SHA256:TkuFkvH6sCJ3kuKrabiD7Z8ORwd+XoH0QjfS0JDvRdI
May 16 10:06:18.882280 sshd-session[6216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 10:06:18.886326 systemd-logind[1576]: New session 30 of user core.
May 16 10:06:18.894626 systemd[1]: Started session-30.scope - Session 30 of User core.
May 16 10:06:19.005235 sshd[6218]: Connection closed by 10.0.0.1 port 54534
May 16 10:06:19.010187 systemd[1]: sshd@29-10.0.0.79:22-10.0.0.1:54534.service: Deactivated successfully.
May 16 10:06:19.005533 sshd-session[6216]: pam_unix(sshd:session): session closed for user core
May 16 10:06:19.012236 systemd[1]: session-30.scope: Deactivated successfully.
May 16 10:06:19.013006 systemd-logind[1576]: Session 30 logged out. Waiting for processes to exit.
May 16 10:06:19.014118 systemd-logind[1576]: Removed session 30.
May 16 10:06:19.774935 containerd[1591]: time="2025-05-16T10:06:19.774712482Z" level=info msg="StopPodSandbox for \"c834442d2f5ba39b5f2ec0546168c6bf29bbbe0eda1b63f79f453724f7dedd85\""
May 16 10:06:19.774935 containerd[1591]: time="2025-05-16T10:06:19.774861171Z" level=info msg="TearDown network for sandbox \"c834442d2f5ba39b5f2ec0546168c6bf29bbbe0eda1b63f79f453724f7dedd85\" successfully"
May 16 10:06:19.774935 containerd[1591]: time="2025-05-16T10:06:19.774871080Z" level=info msg="StopPodSandbox for \"c834442d2f5ba39b5f2ec0546168c6bf29bbbe0eda1b63f79f453724f7dedd85\" returns successfully"
May 16 10:06:19.775895 containerd[1591]: time="2025-05-16T10:06:19.775836436Z" level=info msg="RemovePodSandbox for \"c834442d2f5ba39b5f2ec0546168c6bf29bbbe0eda1b63f79f453724f7dedd85\""
May 16 10:06:19.784440 containerd[1591]: time="2025-05-16T10:06:19.784396460Z" level=info msg="Forcibly stopping sandbox \"c834442d2f5ba39b5f2ec0546168c6bf29bbbe0eda1b63f79f453724f7dedd85\""
May 16 10:06:19.784575 containerd[1591]: time="2025-05-16T10:06:19.784552252Z" level=info msg="TearDown network for sandbox \"c834442d2f5ba39b5f2ec0546168c6bf29bbbe0eda1b63f79f453724f7dedd85\" successfully"
May 16 10:06:19.790160 containerd[1591]: time="2025-05-16T10:06:19.790128708Z" level=info msg="Ensure that sandbox c834442d2f5ba39b5f2ec0546168c6bf29bbbe0eda1b63f79f453724f7dedd85 in task-service has been cleanup successfully"
May 16 10:06:19.793769 containerd[1591]: time="2025-05-16T10:06:19.793730418Z" level=info msg="RemovePodSandbox \"c834442d2f5ba39b5f2ec0546168c6bf29bbbe0eda1b63f79f453724f7dedd85\" returns successfully"